title | content | commands | url |
---|---|---|---|
Images
|
Images OpenShift Container Platform 4.10 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/index
|
Chapter 6. Migrating custom providers
|
Chapter 6. Migrating custom providers Similar to Red Hat Single Sign-On 7.6, custom providers are deployed to Red Hat build of Keycloak by copying them to a deployment directory. In Red Hat build of Keycloak, copy your providers to the providers directory instead of standalone/deployments, which no longer exists. Additional dependencies should also be copied to the providers directory. Red Hat build of Keycloak does not use a separate classpath for custom providers, so you may need to be more careful with the additional dependencies that you include. In addition, the EAR and WAR packaging formats, and jboss-deployment-structure.xml files, are no longer supported. While Red Hat Single Sign-On 7.6 automatically discovered custom providers, and even supported hot-deploying custom providers while Keycloak was running, this behavior is no longer supported. Also, after you make a change to the providers or dependencies in the providers directory, you must run a build or restart the server with the auto-build feature (see the deployment sketch at the end of this chapter). Depending on what APIs your providers use, you may also need to make some changes to the providers. See the following sections for details. 6.1. Transition from Java EE to Jakarta EE Keycloak migrated its codebase from Java EE (Enterprise Edition) to Jakarta EE, which brought various changes. All Jakarta EE specifications have been upgraded to support Jakarta EE 10, such as: Jakarta Persistence 3.1, Jakarta RESTful Web Services 3.1, Jakarta Mail API 2.1, Jakarta Servlet 6.0, and Jakarta Activation 2.1. Jakarta EE 10 provides a modernized, simplified, lightweight approach to building cloud-native Java applications. The main change within this initiative is the namespace change from javax.* to jakarta.* . This change does not apply to javax.* packages provided directly by the JDK, such as javax.security, javax.net, and javax.crypto. In addition, Jakarta EE APIs like session/stateless beans are no longer supported. 6.2. Removed third party dependencies Some dependencies were removed in Red Hat build of Keycloak, including: openshift-rest-client, okio-jvm, okhttp, commons-lang, commons-compress, jboss-dmr, and kotlin-stdlib. Also, because Red Hat build of Keycloak is no longer based on EAP, most of the EAP dependencies were removed. This change means that if you use any of these libraries as dependencies of your own providers deployed to Red Hat build of Keycloak, you may also need to copy those JAR files explicitly to the providers directory of the Keycloak distribution. 6.3. Context and dependency injection are no longer enabled for JAX-RS Resources To provide a better runtime and leverage the underlying stack as much as possible, all injection points for contextual data using the javax.ws.rs.core.Context annotation were removed. The expected performance improvement comes from no longer creating proxy instances multiple times during the request lifecycle and from drastically reducing the amount of reflection code at runtime. 
If you need access to the current request and response objects, you can now obtain their instances directly from the KeycloakSession : @Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response; was replaced by: KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse(); Additional contextual data can be obtained from the runtime through the KeycloakContext instance: KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class); 6.4. Deprecated methods from data providers and models Some previously deprecated methods are now removed in Red Hat build of Keycloak: RealmModel#searchForGroupByNameStream(String, Integer, Integer) UserProvider#getUsersStream(RealmModel, boolean) UserSessionPersisterProvider#loadUserSessions(int, int, boolean, int, String) Interfaces added for Streamification work, such as RoleMapperModel.Streams and similar KeycloakModelUtils#getClientScopeMappings Deprecated methods from KeycloakSession UserQueryProvider#getUsersStream methods Also, these other changes were made: Some methods from UserSessionProvider were moved to UserLoginFailureProvider . Streams interfaces in federated storage provider classes were deprecated. Streamification - interfaces now contain only Stream-based methods. For example in GroupProvider interface @Deprecated List<GroupModel> getGroups(RealmModel realm); was replaced by Stream<GroupModel> getGroupsStream(RealmModel realm); Consistent parameter ordering - methods now have strict parameter ordering where RealmModel is always the first parameter. For example in UserLookupProvider interface: @Deprecated UserModel getUserById(String id, RealmModel realm); was replaced by UserModel getUserById(RealmModel realm, String id) 6.4.1. List of changed interfaces ( o.k. stands for org.keycloak. package) server-spi module o.k.credential.CredentialInputUpdater o.k.credential.UserCredentialStore o.k.models.ClientProvider o.k.models.ClientSessionContext o.k.models.GroupModel o.k.models.GroupProvider o.k.models.KeyManager o.k.models.KeycloakSessionFactory o.k.models.ProtocolMapperContainerModel o.k.models.RealmModel o.k.models.RealmProvider o.k.models.RoleContainerModel o.k.models.RoleMapperModel o.k.models.RoleModel o.k.models.RoleProvider o.k.models.ScopeContainerModel o.k.models.UserCredentialManager o.k.models.UserModel o.k.models.UserProvider o.k.models.UserSessionProvider o.k.models.utils.RoleUtils o.k.sessions.AuthenticationSessionProvider o.k.storage.client.ClientLookupProvider o.k.storage.group.GroupLookupProvider o.k.storage.user.UserLookupProvider o.k.storage.user.UserQueryProvider server-spi-private module o.k.events.EventQuery o.k.events.admin.AdminEventQuery o.k.keys.KeyProvider 6.4.2. Refactorings in the storage layer Red Hat build of Keycloak undergoes a large refactoring to simplify the API usage, which impacts existing code. Some of these changes require updates to existing code. The following sections provide more detail. 6.4.2.1. Changes in the module structure Several public APIs around storage functionality in KeycloakSession have been consolidated, and some have been moved, deprecated, or removed. 
Three new modules have been introduced, and data-oriented code from the server-spi, server-spi-private, and services modules has been moved there: org.keycloak:keycloak-model-legacy Contains all public facing APIs from the legacy store, such as the User Storage API. org.keycloak:keycloak-model-legacy-private Contains private implementations that relate to user storage management, such as storage *Manager classes. org.keycloak:keycloak-model-legacy-services Contains all REST endpoints that directly operate on the legacy store. If, for example, your custom user storage provider implementation uses classes that have been moved to the new modules, you need to update your dependencies to include the new modules listed above. 6.4.2.2. Changes in KeycloakSession KeycloakSession has been simplified. Several methods have been removed in KeycloakSession. KeycloakSession contained several methods for obtaining a provider for a particular object type; for example, for a UserProvider there are users(), userLocalStorage(), userCache(), userStorageManager(), and userFederatedStorage(). This situation may be confusing for the developer who has to understand the exact meaning of each method. For those reasons, only the users() method is kept in KeycloakSession, and it should replace all other calls listed above. The rest of the methods have been removed. The same pattern of deprecation applies to methods of other object areas, such as clients() or groups(). All methods ending in *StorageManager() and *LocalStorage() have been removed. The following sections describe how to migrate those calls to the new API or use the legacy API. 6.4.3. Migrating existing providers Existing providers need no migration if they do not call a removed method, which should be the case for most providers. If the provider uses removed methods but does not rely on local versus non-local storage, changing a call from the now removed userLocalStorage() to the method users() is the best option. Be aware that the semantics change here because the new method involves a cache if caching has been enabled in the local setup. Before migration: accessing a removed API does not compile session.userLocalStorage(); After migration: accessing the new API when the caller does not depend on the legacy storage API session.users(); In the rare case when a custom provider needs to distinguish between the mode of a particular provider, access to the deprecated objects is provided by using the LegacyStoreManagers data store provider. This might be the case if the provider accesses the local storage directly or wants to skip the cache. This option is available only if the legacy modules are part of the deployment. Before migration: accessing a removed API session.userLocalStorage(); After migration: accessing the new functionality via the LegacyStoreManagers API ((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)).userLocalStorage(); Some user storage related APIs have been wrapped in org.keycloak.storage.UserStorageUtil for convenience. 6.4.4. Changes to RealmModel The methods getUserStorageProviders, getUserStorageProvidersStream, getClientStorageProviders, getClientStorageProvidersStream, getRoleStorageProviders, and getRoleStorageProvidersStream have been removed. 
Code which depends on these methods should cast the instance as follows: Before migration: code will not compile due to the changed API realm.getClientStorageProvidersStream()...; After migration: cast the instance to the legacy interface ((LegacyRealmModel) realm).getClientStorageProvidersStream()...; Similarly, code that used to implement the interface RealmModel and wants to provide these methods should implement the new interface LegacyRealmModel. This interface is a sub-interface of RealmModel and includes the old methods: Before migration: code implements the old interface public class MyClass implements RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. */ /* ... */ } After migration: code implements the new interface public class MyClass implements LegacyRealmModel { /* ... */ } 6.4.5. Interface UserCache moved to the legacy module As the caching status of objects will be transparent to services, the interface UserCache has been moved to the module keycloak-model-legacy. Code that depends on the legacy implementation should access the UserCache directly. Before migration: code will not compile session.userCache().evict(realm, user); After migration: use the API directly UserStorageUtil.userCache(session); To trigger the invalidation of a realm, instead of using the UserCache API, consider triggering an event: Before migration: code uses the cache API UserCache cache = session.getProvider(UserCache.class); if (cache != null) cache.evict(realm); After migration: use the invalidation API session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId()); 6.4.6. Credential management for users Credentials for users were previously managed using session.userCredentialManager().method(realm, user, ...). The new way is to leverage user.credentialManager().method(...). This form brings the credential functionality closer to the API of users, and does not rely on prior knowledge of the user credential's location in regard to realm and storage. The old APIs have been removed. Before migration: accessing a removed API session.userCredentialManager().createCredential(realm, user, credentialModel) After migration: accessing the new API user.credentialManager().createStoredCredential(credentialModel) For a custom UserStorageProvider, there is a new method credentialManager() that needs to be implemented when returning a UserModel. These must return an instance of the LegacyUserCredentialManager: Before migration: code will not compile due to the new method credentialManager() required by UserModel public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } } After migration: implementation of the API UserModel.credentialManager() for the legacy store. public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } }
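The deployment steps described at the start of this chapter (copying providers and their additional dependencies to the providers directory and then rebuilding) can be summarized in a minimal sketch. The JAR file names and the /opt/keycloak installation path are hypothetical placeholders; the commands assume the default bin/kc.sh launcher script of the Quarkus-based distribution:
# Copy the custom provider and any additional dependencies into the providers directory
cp my-custom-provider.jar /opt/keycloak/providers/
cp my-provider-dependency.jar /opt/keycloak/providers/
# Rebuild the server so that the new providers are picked up
/opt/keycloak/bin/kc.sh build
During development, running kc.sh start-dev typically performs this build step implicitly, so provider changes are picked up on the next start.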
|
[
"@Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response;",
"KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse();",
"KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class);",
"@Deprecated List<GroupModel> getGroups(RealmModel realm);",
"Stream<GroupModel> getGroupsStream(RealmModel realm);",
"@Deprecated UserModel getUserById(String id, RealmModel realm);",
"UserModel getUserById(RealmModel realm, String id)",
"session .userLocalStorage() ;",
"session .users() ;",
"session .userLocalStorage() ;",
"((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)) .userLocalStorage() ;",
"realm .getClientStorageProvidersStream() ...;",
"((LegacyRealmModel) realm) .getClientStorageProvidersStream() ...;",
"public class MyClass extends RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. / / ... */ }",
"public class MyClass extends LegacyRealmModel { /* ... */ }",
"session**.userCache()**.evict(realm, user);",
"UserStorageUitl.userCache(session);",
"UserCache cache = session.getProvider(UserCache.class); if (cache != null) cache.evict(realm)();",
"session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId());",
"session.userCredentialManager() .createCredential (realm, user, credentialModel)",
"user.credentialManager() .createStoredCredential (credentialModel)",
"public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } }",
"public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/migration_guide/migrating-providers
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/product_guide/making-open-source-more-inclusive
|
Chapter 1. Release notes
|
Chapter 1. Release notes 1.1. Logging 5.9 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Dedicated. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . (A sample Subscription manifest showing this channel setting appears at the end of this chapter.) 1.1.1. Logging 5.9.7 This release includes OpenShift Logging Bug Fix Release 5.9.7 . 1.1.1.1. Bug fixes Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the clusterlogforwarder.spec.outputs.http.timeout parameter is now correctly applied, ensuring Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration. ( LOG-6125 ) Before this update, the TLS section was added without verifying the broker URL schema, resulting in SSL connection errors if the URLs did not start with tls . With this update, the TLS section is now added only if the broker URLs start with tls , preventing SSL connection errors. ( LOG-6041 ) 1.1.1.2. CVEs CVE-2024-6104 CVE-2024-6119 CVE-2024-34397 CVE-2024-45296 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-45801 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.2. Logging 5.9.6 This release includes OpenShift Logging Bug Fix Release 5.9.6 . 1.1.2.1. Bug fixes Before this update, the collector deployment ignored secret changes, causing receivers to reject logs. With this update, the system rolls out a new pod when there is a change in the secret value, ensuring that the collector reloads the updated secrets. ( LOG-5525 ) Before this update, Vector could not correctly parse field values that included a single dollar sign ( $ ). With this update, field values with a single dollar sign are automatically changed to two dollar signs ( $$ ), ensuring proper parsing by Vector. ( LOG-5602 ) Before this update, the drop filter could not handle non-string values (e.g., .responseStatus.code: 403 ). With this update, the drop filter now works properly with these values. ( LOG-5815 ) Before this update, the collector used the default settings to collect audit logs, without handling the backload from output receivers. With this update, the process for collecting audit logs has been improved to better manage file handling and log reading efficiency. ( LOG-5866 ) Before this update, the must-gather tool failed on clusters with non-AMD64 architectures such as Arm or PowerPC. With this update, the tool now detects the cluster architecture at runtime and uses architecture-independent paths and dependencies. The detection allows must-gather to run smoothly on platforms like Arm and PowerPC. ( LOG-5997 ) Before this update, the log level was set using a mix of structured and unstructured keywords that were unclear. With this update, the log level follows a clear, documented order, starting with structured keywords. ( LOG-6016 ) Before this update, multiple unnamed pipelines writing to the default output in the ClusterLogForwarder caused a validation error due to duplicate auto-generated names. 
With this update, the pipeline names are now generated without duplicates. ( LOG-6033 ) Before this update, the collector pods did not have the PreferredScheduling annotation. With this update, the PreferredScheduling annotation is added to the collector daemonset. ( LOG-6023 ) 1.1.2.2. CVEs CVE-2024-0286 CVE-2024-2398 CVE-2024-37370 CVE-2024-37371 1.1.3. Logging 5.9.5 This release includes OpenShift Logging Bug Fix Release 5.9.5 1.1.3.1. Bug Fixes Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. ( LOG-5855 ) Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. ( LOG-5895 ) Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5945 ) 1.1.3.2. CVEs None. 1.1.4. Logging 5.9.4 This release includes OpenShift Logging Bug Fix Release 5.9.4 1.1.4.1. Bug Fixes Before this update, an incorrectly formatted timeout configuration caused the OCP plugin to crash. With this update, a validation prevents the crash and informs the user about the incorrect configuration. ( LOG-5373 ) Before this update, workloads with labels containing - caused an error in the collector when normalizing log entries. With this update, the configuration change ensures the collector uses the correct syntax. ( LOG-5524 ) Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. ( LOG-5697 ) Before this update, the Loki Operator would crash if the CredentialRequest specification was registered in an environment without the cloud-credentials-operator . With this update, the CredentialRequest specification only registers in environments that are cloud-credentials-operator enabled. ( LOG-5701 ) Before this update, the Logging Operator watched and processed all config maps across the cluster. With this update, the dashboard controller only watches the config map for the logging dashboard. ( LOG-5702 ) Before this update, the ClusterLogForwarder introduced an extra space in the message payload which did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. ( LOG-5707 ) Before this update, removing the seeding for grafana-dashboard-cluster-logging as a part of ( LOG-5308 ) broke new greenfield deployments without dashboards. With this update, the Logging Operator seeds the dashboard at the beginning and continues to update it for changes. ( LOG-5747 ) Before this update, LokiStack was missing a route for the Volume API causing the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5749 ) 1.1.4.2. CVEs CVE-2024-24790 1.1.5. Logging 5.9.3 This release includes OpenShift Logging Bug Fix Release 5.9.3 1.1.5.1. Bug Fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. 
With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5614 ) Before this update, monitoring the Vector collector output buffer state was not possible. With this update, monitoring and alerting on the Vector collector output buffer size is possible, which improves observability capabilities and helps keep the system running optimally. ( LOG-5586 ) 1.1.5.2. CVEs CVE-2024-2961 CVE-2024-28182 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.1.6. Logging 5.9.2 This release includes OpenShift Logging Bug Fix Release 5.9.2 1.1.6.1. Bug Fixes Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. ( LOG-4910 ) Before this update, the rotated infrastructure log files were sent to the application index in some scenarios due to an incorrect configuration in the Vector log collector. With this update, the Vector log collector configuration avoids collecting any rotated infrastructure log files. ( LOG-5156 ) Before this update, the Logging Operator did not monitor changes to the grafana-dashboard-cluster-logging config map. With this update, the Logging Operator monitors changes in the ConfigMap objects, ensuring the system stays synchronized and responds effectively to config map modifications. ( LOG-5308 ) Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5426 ) Before this update, the Fluentd out_http plugin ignored the no_proxy environment variable. With this update, Fluentd patches the Ruby HTTP#start method to honor the no_proxy environment variable. ( LOG-5466 ) 1.1.6.2. CVEs CVE-2022-48554 CVE-2023-2975 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-6129 CVE-2023-6237 CVE-2023-7008 CVE-2023-45288 CVE-2024-0727 CVE-2024-22365 CVE-2024-25062 CVE-2024-28834 CVE-2024-28835 1.1.7. Logging 5.9.1 This release includes OpenShift Logging Bug Fix Release 5.9.1 1.1.7.1. Enhancements Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5401 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5395 ) 1.1.7.2. Bug Fixes Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. ( LOG-5268 ) Before this update, a prune filter without a defined pruneFilterSpec would cause a segfault. With this update, there is a validation error if a prune filter is without a defined pruneFilterSpec . ( LOG-5322 ) Before this update, a drop filter without a defined dropTestsSpec would cause a segfault. With this update, there is a validation error if a drop filter is without a defined dropTestsSpec . 
( LOG-5323 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5397 ) Before this update, poorly formatted timestamp fields in audit log records led to WARN messages in Red Hat OpenShift Logging Operator logs. With this update, a remap transformation ensures that the timestamp field is properly formatted. ( LOG-4672 ) Before this update, the error message thrown while validating a ClusterLogForwarder resource name and namespace did not correspond to the correct error. With this update, the system checks if a ClusterLogForwarder resource with the same name exists in the same namespace. If it does not, the correct error is reported. ( LOG-5062 ) Before this update, the validation feature for output config required a TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error messages are more informative. ( LOG-5307 ) Before this update, defining an infrastructure input type did not exclude logging workloads from the collection. With this update, the collection excludes logging services to avoid feedback loops. ( LOG-5309 ) 1.1.7.3. CVEs No CVEs. 1.1.8. Logging 5.9.0 This release includes OpenShift Logging Bug Fix Release 5.9.0 1.1.8.1. Removal notice The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. Instances of the OpenShift Elasticsearch Operator from prior logging releases remain supported until the EOL of the logging release. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 1.1.8.2. Deprecation notice In Logging 5.9, Fluentd and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Dedicated. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. In Logging 5.9, the Fields option for the Splunk output type was never implemented and is now deprecated. It will be removed in a future release. 1.1.8.3. Enhancements 1.1.8.3.1. Log Collection This enhancement adds the ability to refine the process of log collection by using a workload's metadata to drop or prune logs based on their content. Additionally, it allows the collection of infrastructure logs, such as journal or container logs, and audit logs, such as kube api or ovn logs, to only collect individual sources. ( LOG-2155 ) This enhancement introduces a new type of remote log receiver, the syslog receiver. You can configure it to expose a port over a network, allowing external systems to send syslog logs using compatible tools such as rsyslog. 
( LOG-3527 ) With this update, the ClusterLogForwarder API now supports log forwarding to Azure Monitor Logs, giving users better monitoring abilities. This feature helps users to maintain optimal system performance and streamline the log analysis processes in Azure Monitor, which speeds up issue resolution and improves operational efficiency. ( LOG-4605 ) This enhancement improves collector resource utilization by deploying collectors as a deployment with two replicas. This occurs when the only input source defined in the ClusterLogForwarder custom resource (CR) is a receiver input instead of using a daemon set on all nodes. Additionally, collectors deployed in this manner do not mount the host file system. To use this enhancement, you need to annotate the ClusterLogForwarder CR with the logging.openshift.io/dev-preview-enable-collector-as-deployment annotation. ( LOG-4779 ) This enhancement introduces the capability for custom tenant configuration across all supported outputs, facilitating the organization of log records in a logical manner. However, it does not permit custom tenant configuration for logging managed storage. ( LOG-4843 ) With this update, the ClusterLogForwarder CR that specifies an application input with one or more infrastructure namespaces like default , openshift* , or kube* , now requires a service account with the collect-infrastructure-logs role. ( LOG-4943 ) This enhancement introduces the capability for tuning some output settings, such as compression, retry duration, and maximum payloads, to match the characteristics of the receiver. Additionally, this feature includes a delivery mode to allow administrators to choose between throughput and log durability. For example, the AtLeastOnce option configures minimal disk buffering of collected logs so that the collector can deliver those logs after a restart. ( LOG-5026 ) This enhancement adds three new Prometheus alerts, warning users about the deprecation of Elasticsearch, Fluentd, and Kibana. ( LOG-5055 ) 1.1.8.3.2. Log Storage This enhancement in LokiStack improves support for OTEL by using the new V13 object storage format and enabling automatic stream sharding by default. This also prepares the collector for future enhancements and configurations. ( LOG-4538 ) This enhancement introduces support for short-lived token workload identity federation with Azure and AWS log stores for STS enabled OpenShift Dedicated 4.14 and later clusters. Local storage requires the addition of a CredentialMode: static annotation under spec.storage.secret in the LokiStack CR. ( LOG-4540 ) With this update, the validation of the Azure storage secret is now extended to give early warning for certain error conditions. ( LOG-4571 ) With this update, Loki now adds upstream and downstream support for GCP workload identity federation mechanism. This allows authenticated and authorized access to the corresponding object storage services. ( LOG-4754 ) 1.1.8.4. Bug Fixes Before this update, the logging must-gather could not collect any logs on a FIPS-enabled cluster. With this update, a new oc client is available in cluster-logging-rhel9-operator , and must-gather works properly on FIPS clusters. ( LOG-4403 ) Before this update, the LokiStack ruler pods could not format the IPv6 pod IP in HTTP URLs used for cross-pod communication. This issue caused querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the problem. 
Now, querying rules and alerts through the Prometheus-compatible API works just like in IPv4 environments. ( LOG-4709 ) Before this fix, the YAML content from the logging must-gather was exported in a single line, making it unreadable. With this update, the YAML white spaces are preserved, ensuring that the file is properly formatted. ( LOG-4792 ) Before this update, when the ClusterLogForwarder CR was enabled, the Red Hat OpenShift Logging Operator could run into a nil pointer exception when ClusterLogging.Spec.Collection was nil. With this update, the issue is now resolved in the Red Hat OpenShift Logging Operator. ( LOG-5006 ) Before this update, in specific corner cases, replacing the ClusterLogForwarder CR status field caused the resourceVersion to constantly update due to changing timestamps in Status conditions. This condition led to an infinite reconciliation loop. With this update, all status conditions synchronize, so that timestamps remain unchanged if conditions stay the same. ( LOG-5007 ) Before this update, there was an internal buffering behavior to drop_newest to address high memory consumption by the collector resulting in significant log loss. With this update, the behavior reverts to using the collector defaults. ( LOG-5123 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5165 ) Before this update, the configuration of the Loki Operator ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5212 ) 1.1.8.5. Known Issues None. 1.1.8.6. CVEs CVE-2023-5363 CVE-2023-5981 CVE-2023-46218 CVE-2024-0553 CVE-2023-0567
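As referenced in the note at the start of this chapter, staying on updates for a prior logging release means switching the subscription channel to stable-x.y. The following is a minimal sketch of what that setting looks like in an OLM Subscription manifest; the cluster-logging package name and the openshift-logging namespace are common defaults but are assumptions here, and stable-5.9 is only an example channel value.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging            # assumed Subscription name for the logging Operator
  namespace: openshift-logging     # assumed installation namespace
spec:
  channel: stable-5.9              # pin to the major.minor logging version you have installed
  name: cluster-logging            # Operator package name (assumed)
  source: redhat-operators
  sourceNamespace: openshift-marketplace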
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/release-notes
|
Chapter 24. Configuring Routes
|
Chapter 24. Configuring Routes 24.1. Route configuration 24.1.1. Creating an HTTP-based route A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: $ oc new-project hello-openshift Create a pod in the project by running the following command: $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: $ oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: $ oc expose svc hello-openshift Verification To verify that the route resource was created, run the following command: $ oc get routes -o yaml <name of resource> 1 1 In this example, the route is named hello-openshift . Sample YAML definition of the created unsecured route: apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift 1 <Ingress_Domain> is the default ingress domain name. The ingresses.config/cluster object is created during the installation and cannot be changed. If you want to specify a different domain, you can specify an alternative cluster domain using the appsDomain option. 2 targetPort is the target port on pods that is selected by the service that this route points to. Note To display your default ingress domain, run the following command: $ oc get ingresses.config/cluster -o jsonpath={.spec.domain} 24.1.2. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. 
Procedure Create a project called hello-openshift by running the following command: $ oc new-project hello-openshift Create a pod in the project by running the following command: $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: $ oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding: apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: $ oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: $ oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . 24.1.3. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: $ oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : $ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 24.1.4. HTTP Strict Transport Security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. 
HSTS is useful for speeding up interactions with websites. When HSTS policy is enforced, HSTS adds a Strict-Transport-Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Cluster administrators can configure HSTS to do the following: Enable HSTS per-route Disable HSTS per-route Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains Important HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes. 24.1.4.1. Enabling HTTP Strict Transport Security per-route HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command: $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1 includeSubDomains;preload" 1 In this example, the maximum age is set to 31536000 seconds, which is approximately one year. Note In this example, the equal sign ( = ) is in quotes. This is required to properly execute the annotate command. Example route configured with an annotation apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 ... spec: host: def.abc.com tls: termination: "reencrypt" ... wildcardPolicy: "Subdomain" 1 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. 2 Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. 3 Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header. 24.1.4.2. Disabling HTTP Strict Transport Security per-route To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0 . Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. 
Procedure To disable HSTS, set the max-age value in the route annotation to 0 , by entering the following command: $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Tip You can alternatively apply the following YAML to the route: Example of disabling HSTS per-route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0 To disable HSTS for every route in a namespace, enter the following command: $ oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Verification To query the annotation for all routes, enter the following command: $ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: routename HSTS: max-age=0 24.1.4.3. Enforcing HTTP Strict Transport Security per-domain To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy. If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation. Note To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates. Note You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations. Important HSTS cannot be applied to insecure, or non-TLS, routes, even if HSTS is requested for all routes globally. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure Edit the Ingress config file: $ oc edit ingresses.config.openshift.io/cluster Example HSTS policy apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains 1 Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies. 2 7 Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns . 3 Optional. If you include namespaceSelector , it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that only match the namespaceSelector and not the domainPatterns are not validated. 4 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows for a smallest and largest max-age to be enforced. 
The largestMaxAge value must be between 0 and 2147483647 . It can be left unspecified, which means no upper limit is enforced. The smallestMaxAge value must be between 0 and 2147483647 . Enter 0 to disable HSTS for troubleshooting, otherwise enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced. 5 Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following: RequirePreload : preload is required by the RequiredHSTSPolicy . RequireNoPreload : preload is forbidden by the RequiredHSTSPolicy . NoOpinion : preload does not matter to the RequiredHSTSPolicy . 6 Optional. includeSubDomainsPolicy can be set with one of the following: RequireIncludeSubDomains : includeSubDomains is required by the RequiredHSTSPolicy . RequireNoIncludeSubDomains : includeSubDomains is forbidden by the RequiredHSTSPolicy . NoOpinion : includeSubDomains does not matter to the RequiredHSTSPolicy . You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command. To apply HSTS to all routes in the cluster, enter the oc annotate command. For example: $ oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" To apply HSTS to all routes in a particular namespace, enter the oc annotate command. For example: $ oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" Verification You can review the HSTS policy you configured. For example: To review the maxAge set for required HSTS policies, enter the following command: $ oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}' To review the HSTS annotations on all routes, enter the following command: $ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains 24.1.5. Throughput issue troubleshooting methods Sometimes applications deployed by using OpenShift Container Platform can cause network throughput issues, such as unusually high latency between specific services. If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues: Use ping or a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node. For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane. $ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> and host <podip 2> 1 1 podip is the IP address for the pod. 
Run the oc get pod <pod_name> -o wide command to get the IP address of a pod. The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with: $ tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 Use a bandwidth measuring tool, such as iperf , to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes. For information on installing and using iperf , see this Red Hat Solution . In some cases, the cluster may mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action. If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod. Additional resources Latency spikes or temporary reduction in throughput to remote workers Ingress Controller configuration parameters 24.1.6. Using cookies to keep route statefulness OpenShift Container Platform provides sticky sessions, which enable stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. OpenShift Container Platform can use cookies to configure session persistence. The Ingress Controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides the source IP, the same number is set for all connections and traffic is sent to the same pod. 24.1.6.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. By deleting the cookie, the application can force the request to re-choose an endpoint. So, if a server was overloaded, it can remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: $ oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie : $ oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: $ ROUTE_NAME=$(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. 
Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 24.1.7. Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. However, this depends on the router implementation. The following table shows example routes and their accessibility: Table 24.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 24.1.8. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 24.2. Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is source for TLS passthrough routes. For all other routes, the default is random . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks. 
haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_whitelist Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [ 1 ] haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/log-send-hostname Sets the hostname field in the Syslog header. Uses the hostname of the system. log-send-hostname is enabled by default if any Ingress API logging method, such as sidecar or Syslog facility, is enabled for the router. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : cookies are transferred between the visited site and third-party sites. Strict : cookies are restricted to the visited site. None : cookies are restricted to the visited site. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. 
ROUTER_SET_FORWARDED_HEADERS If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config . This file is stored in the var/lib/haproxy/router/whitelists folder. Note To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges are listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist. Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us *(microseconds), ms (milliseconds, default), s (seconds), m (minutes), h *(hours), d (days). The regular expression is: [1-9][0-9]*( us \| ms \| s \| m \| h \| d ). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router. ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s Allows the minimum frequency for the router to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics. A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route. 
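Any of the route-level annotations in Table 24.2 can be set on an individual route with oc annotate or directly in the route's metadata.annotations. For example, to give a WebSocket-heavy route a longer tunnel timeout, a command might look like the following; myroute is a placeholder route name:

USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout-tunnel=1h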
A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 A route that allows both IP an address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 24.3. rewrite-target examples: Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar 24.1.9. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 24.1.10. Creating a route through an Ingress object Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted. 
Procedure Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command: YAML Definition of an Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate 1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge , passthrough and reencrypt . All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route. 3 When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com , to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname. If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec: spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443 USD oc apply -f ingress.yaml 2 The route.openshift.io/destination-ca-certificate-secret can be used on an Ingress object to define a route with a custom destination certificate (CA). The annotation references a kubernetes secret, secret-ca-cert that will be inserted into the generated route. To specify a route object with a destination CA from an ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret. List your routes: USD oc get routes The result includes an autogenerated route whose name starts with frontend- : NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None If you inspect this route, it looks this: YAML Definition of an autogenerated route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend 24.1.11. Creating a route using the default certificate through an Ingress object If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows. Prerequisites You have a service that you want to expose. 
You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml : YAML definition of an Ingress object apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend ... spec: rules: ... tls: - {} 1 1 Use this exact syntax to specify TLS without specifying a custom certificate. Create the Ingress object by running the following command: USD oc create -f example-ingress.yaml Verification Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command: USD oc get routes -o yaml Example output apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 ... spec: ... tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3 ... 1 The name of the route includes the name of the Ingress object followed by a random suffix. 2 In order to use the default certificate, the route should not specify spec.certificate . 3 The route should specify the edge termination policy. 24.1.12. Creating a route using the destination CA certificate in the Ingress annotation The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate. Prerequisites You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Procedure Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1 ... 1 The annotation references a kubernetes secret. The secret referenced in this annotation will be inserted into the generated route. Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: ... tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- ... 24.1.13. Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes. The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services. Prerequisites You deployed an OpenShift Container Platform cluster on bare metal. You installed the OpenShift CLI ( oc ). Procedure To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. 
For example: Sample service YAML file apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: "<resource_version_number>" selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {} 1 In a dual-stack instance, there are two different clusterIPs provided. 2 For a single-stack instance, enter IPv4 or IPv6 . For a dual-stack instance, enter both IPv4 and IPv6 . 3 For a single-stack instance, enter SingleStack . For a dual-stack instance, enter RequireDualStack . These resources generate corresponding endpoints . The Ingress Controller now watches endpointslices . To view endpoints , enter the following command: USD oc get endpoints To view endpointslices , enter the following command: USD oc get endpointslices Additional resources Specifying an alternative cluster domain using the appsDomain option 24.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. Important If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 24.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . 
Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 24.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route edge --help for more options. 24.2.3. Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. 
Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication.
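Because the router does not terminate TLS for a passthrough route, the certificate presented to clients is the one served by the destination pod. One way to confirm this, assuming the www.example.com host from the example above, is to inspect the subject and issuer of the certificate returned on the route:

USD openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

The output should match the certificate configured in the destination pod rather than the default ingress certificate.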
|
[
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"oc get routes -o yaml <name of resource> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc edit ingresses.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains",
"oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}",
"oc get endpoints",
"oc get endpointslices",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/configuring-routes
|
26.7. Miscellaneous Parameters
|
26.7. Miscellaneous Parameters The following parameters can be defined in a parameter file but do not work in a CMS configuration file. askmethod Do not use an automatically detected DVD as the installation source; instead, prompt for the installation method so that the installation source can be specified manually. This parameter is useful if you booted from an FCP-attached DVD but want to continue with another installation source, for example on the network or on a local hard disk. mediacheck Turns on testing of an ISO-based installation source; for example, when booted from an FCP-attached DVD or using repo= with an ISO on local hard disk or mounted with NFS. nompath Disables support for multipathing devices. proxy=[ protocol ://][ username [: password ]@] host [: port ] Specifies a proxy to use for installation over HTTP, HTTPS, or FTP. rescue Boot into a rescue system running from a ramdisk that can be used to fix and restore an installed system. stage2= URL Specifies a path to an install.img file instead of to an installation source. Otherwise, follows the same syntax as repo= . If stage2 is specified, it typically takes precedence over other methods of finding install.img . However, if anaconda finds install.img on local media, the stage2 URL will be ignored. If stage2 is not specified and install.img cannot be found locally, anaconda looks to the location given by repo= or method= . If only stage2= is given without repo= or method= , anaconda uses whatever repos the installed system would have enabled by default for installation. syslog= IP/hostname [: port ] Makes the installer send log messages to a remote syslog server. The boot parameters described here are the most useful for installations and troubleshooting on System z, but only a subset of those that influence the installer. Refer to Chapter 28, Boot Options for a more complete list of installer boot parameters.
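For illustration, a parameter file that combines several of these options might contain lines such as the following; the proxy credentials, host names, and port numbers are placeholders:

proxy=http://installuser:secret@proxy.example.com:3128
syslog=loghost.example.com:514
nompath

With these lines, the installer reaches an HTTP, HTTPS, or FTP installation source through the given proxy, forwards its log messages to the remote syslog server, and does not configure multipathing devices.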
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-Miscellaneous_parameters
|
Chapter 13. Hardware networks
|
Chapter 13. Hardware networks 13.1. About Single Root I/O Virtualization (SR-IOV) hardware networks The Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can share a single device with multiple pods. SR-IOV enables you to segment a compliant network device, recognized on the host node as a physical function (PF), into multiple virtual functions (VFs). The VF is used like any other network device. The SR-IOV device driver for the device determines how the VF is exposed in the container: netdevice driver: A regular kernel network device in the netns of the container vfio-pci driver: A character device mounted in the container You can use SR-IOV network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You can enable SR-IOV on a node by using the following command: USD oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" 13.1.1. Components that manage SR-IOV network devices The SR-IOV Network Operator creates and manages the components of the SR-IOV stack. It performs the following functions: Orchestrates discovery and management of SR-IOV network devices Generates NetworkAttachmentDefinition custom resources for the SR-IOV Container Network Interface (CNI) Creates and updates the configuration of the SR-IOV network device plug-in Creates node specific SriovNetworkNodeState custom resources Updates the spec.interfaces field in each SriovNetworkNodeState custom resource The Operator provisions the following components: SR-IOV network configuration daemon A DaemonSet that is deployed on worker nodes when the SR-IOV Operator starts. The daemon is responsible for discovering and initializing SR-IOV network devices in the cluster. SR-IOV Operator webhook A dynamic admission controller webhook that validates the Operator custom resource and sets appropriate default values for unset fields. SR-IOV Network resources injector A dynamic admission controller webhook that provides functionality for patching Kubernetes pod specifications with requests and limits for custom network resources such as SR-IOV VFs. The SR-IOV network resources injector adds the resource field to only the first container in a pod automatically. SR-IOV network device plug-in A device plug-in that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. Device plug-ins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plug-ins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources. SR-IOV CNI plug-in A CNI plug-in that attaches VF interfaces allocated from the SR-IOV device plug-in directly into a pod. SR-IOV InfiniBand CNI plug-in A CNI plug-in that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV device plug-in directly into a pod. Note The SR-IOV Network resources injector and SR-IOV Network Operator webhook are enabled by default and can be disabled by editing the default SriovOperatorConfig CR. 13.1.1.1. Supported platforms The SR-IOV Network Operator is supported on the following platforms: Bare metal Red Hat OpenStack Platform (RHOSP) 13.1.1.2. Supported devices OpenShift Container Platform supports the following network interface controllers: Table 13.1. 
Supported network interface controllers Manufacturer Model Vendor ID Device ID Intel X710 8086 1572 Intel XXV710 8086 158b Mellanox MT27700 Family [ConnectX‐4] 15b3 1013 Mellanox MT27710 Family [ConnectX‐4 Lx] 15b3 1015 Mellanox MT27800 Family [ConnectX‐5] 15b3 1017 Mellanox MT28908 Family [ConnectX‐6] 15b3 101b 13.1.1.3. Automated discovery of SR-IOV network devices The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device. The CR is assigned the same name as the worker node. The status.interfaces list provides information about the network devices on a node. Important Do not modify a SriovNetworkNodeState object. The Operator creates and manages these resources automatically. 13.1.1.3.1. Example SriovNetworkNodeState object The following YAML is an example of a SriovNetworkNodeState object created by the SR-IOV Network Operator: An SriovNetworkNodeState object apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: "39824" status: interfaces: 2 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: "0000:18:00.0" totalvfs: 8 vendor: 15b3 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: "0000:18:00.1" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: "8086" syncStatus: Succeeded 1 The value of the name field is the same as the name of the worker node. 2 The interfaces stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. 13.1.1.4. Example use of a virtual function in a pod You can run a remote direct memory access (RDMA) or a Data Plane Development Kit (DPDK) application in a pod with SR-IOV VF attached. This example shows a pod using a virtual function (VF) in RDMA mode: Pod spec that uses RDMA mode apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] command: ["sleep", "infinity"] The following example shows a pod with a VF in DPDK mode: Pod spec that uses DPDK mode apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" requests: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 13.1.1.5. 
DPDK library for use with container applications An optional library , app-netutil , provides several API methods for gathering network information about a pod from within a container running within that pod. This library is intended to assist with integrating SR-IOV virtual functions (VFs) in Data Plane Development Kit (DPDK) mode into the container. The library provides both a Golang API and a C API. Currently there are three API methods implemented: GetCPUInfo() This function determines which CPUs are available to the container and returns the list to the caller. GetHugepages() This function determines the amount of hugepage memory requested in the Pod spec for each container and returns the values to the caller. Note Exposing hugepages via Kubernetes Downward API is an alpha feature in Kubernetes 1.20 and is not enabled in OpenShift Container Platform. The API can be tested by enabling the feature gate, FEATURE_GATES="DownwardAPIHugePages=true" on Kubernetes 1.20 or greater. GetInterfaces() This function determines the set of interfaces in the container and returns the list, along with the interface type and type specific data. There is also a sample Docker image, dpdk-app-centos , which can run one of the following DPDK sample applications based on an environmental variable in the pod-spec: l2fwd , l3wd or testpmd . This Docker image provides an example of integrating the app-netutil into the container image itself. The library can also integrate into an init-container which collects the required data and passes the data to an existing DPDK workload. 13.1.2. steps Installing the SR-IOV Network Operator Optional: Configuring the SR-IOV Network Operator Configuring an SR-IOV network device If you use OpenShift Virtualization: Configuring an SR-IOV network device for virtual machines Configuring an SR-IOV network attachment Adding a pod to an SR-IOV additional network 13.2. Installing the SR-IOV Network Operator You can install the Single Root I/O Virtualization (SR-IOV) Network Operator on your cluster to manage SR-IOV network devices and network attachments. 13.2.1. Installing SR-IOV Network Operator As a cluster administrator, you can install the SR-IOV Network Operator by using the OpenShift Container Platform CLI or the web console. 13.2.1.1. CLI: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. Procedure To create the openshift-sriov-network-operator namespace, enter the following command: USD cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator EOF To create an OperatorGroup CR, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF Subscribe to the SR-IOV Network Operator. Run the following command to get the OpenShift Container Platform major and minor version. It is required for the channel value in the step. 
USD OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) To create a Subscription CR for the SR-IOV Network Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "USD{OC_VERSION}" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF To verify that the Operator is installed, enter the following command: USD oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase sriov-network-operator.4.4.0-202006160135 Succeeded 13.2.1.2. Web console: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the web console. Note You must create the operator group by using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. Procedure Create a namespace for the SR-IOV Network Operator: In the OpenShift Container Platform web console, click Administration Namespaces . Click Create Namespace . In the Name field, enter openshift-sriov-network-operator , and then click Create . Install the SR-IOV Network Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select SR-IOV Network Operator from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster , select openshift-sriov-network-operator . Click Install . Verify that the SR-IOV Network Operator is installed successfully: Navigate to the Operators Installed Operators page. Ensure that SR-IOV Network Operator is listed in the openshift-sriov-network-operator project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the operator does not appear as installed, to troubleshoot further: Inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-sriov-network-operator project. 13.2.2. steps Optional: Configuring the SR-IOV Network Operator 13.3. Configuring the SR-IOV Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. 13.3.1. Configuring the SR-IOV Network Operator Important Modifying the SR-IOV Network Operator configuration is not normally necessary. The default configuration is recommended for most use cases. Complete the steps to modify the relevant configuration only if the default behavior of the Operator is not compatible with your use case. The SR-IOV Network Operator adds the SriovOperatorConfig.sriovnetwork.openshift.io CustomResourceDefinition resource. The operator automatically creates a SriovOperatorConfig custom resource (CR) named default in the openshift-sriov-network-operator namespace. Note The default CR contains the SR-IOV Network Operator configuration for your cluster. To change the operator configuration, you must modify this CR. 
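Before changing anything, you can inspect the default CR that the Operator created; a minimal sketch, using the namespace and resource name described above:

USD oc get sriovoperatorconfig default -n openshift-sriov-network-operator -o yaml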
The SriovOperatorConfig object provides several fields for configuring the operator: enableInjector allows project administrators to enable or disable the Network Resources Injector daemon set. enableOperatorWebhook allows project administrators to enable or disable the Operator Admission Controller webhook daemon set. configDaemonNodeSelector allows project administrators to schedule the SR-IOV Network Config Daemon on selected nodes. 13.3.1.1. About the Network Resources Injector The Network Resources Injector is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Mutation of resource requests and limits in Pod specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. Mutation of Pod specifications with downward API volume to expose pod annotations and labels to the running container as files under the /etc/podnetinfo path. By default the Network Resources Injector is enabled by the SR-IOV operator and runs as a daemon set on all control plane nodes (also known as the master nodes). The following is an example of Network Resources Injector pods running in a cluster with three control plane nodes: USD oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m 13.3.1.2. About the SR-IOV Operator admission controller webhook The SR-IOV Operator Admission Controller webhook is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Validation of the SriovNetworkNodePolicy CR when it is created or updated. Mutation of the SriovNetworkNodePolicy CR by setting the default value for the priority and deviceType fields when the CR is created or updated. By default the SR-IOV Operator Admission Controller webhook is enabled by the operator and runs as a daemon set on all control plane nodes. The following is an example of the Operator Admission Controller webhook pods running in a cluster with three control plane nodes: USD oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m 13.3.1.3. About custom node selectors The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. 13.3.1.4. Disabling or enabling the Network Resources Injector To disable or enable the Network Resources Injector, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Operator. Procedure Set the enableInjector field. Replace <value> with false to disable the feature or true to enable the feature. USD oc patch sriovoperatorconfig default \ --type=merge -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableInjector": <value> } }' 13.3.1.5. Disabling or enabling the SR-IOV Operator admission controller webhook To disable or enable the admission controller webhook, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). 
Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Operator. Procedure Set the enableOperatorWebhook field. Replace <value> with false to disable the feature or true to enable it: USD oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableOperatorWebhook": <value> } }' 13.3.1.6. Configuring a custom NodeSelector for the SR-IOV Network Config daemon The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. To specify the nodes where the SR-IOV Network Config daemon is deployed, complete the following procedure. Important When you update the configDaemonNodeSelector field, the SR-IOV Network Config daemon is recreated on each selected node. While the daemon is recreated, cluster users are unable to apply any new SR-IOV Network node policy or create new SR-IOV pods. Procedure To update the node selector for the operator, enter the following command: USD oc patch sriovoperatorconfig default --type=json \ -n openshift-sriov-network-operator \ --patch '[{ "op": "replace", "path": "/spec/configDaemonNodeSelector", "value": {<node-label>} }]' Replace <node-label> with a label to apply as in the following example: "node-role.kubernetes.io/worker": "" . 13.3.2. steps Configuring an SR-IOV network device 13.4. Configuring an SR-IOV network device You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster. 13.4.1. SR-IOV network node configuration object You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the sriovnetwork.openshift.io API group. The following YAML describes an SR-IOV network node policy: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", ...] 12 netFilter: "<filter_string>" 13 deviceType: <device_type> 14 isRdma: false 15 linkType: <link_type> 16 1 The name for the custom resource object. 2 The namespace where the SR-IOV Operator is installed. 3 The resource name of the SR-IOV device plug-in. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plug-in and device plug-in are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 Optional: The maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different network interface controller (NIC) models. 7 The number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. 
For a Mellanox NIC, the number of VFs cannot be larger than 128 . 8 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 9 Optional: The vendor hexadecimal code of the SR-IOV network device. The only allowed values are 8086 and 15b3 . 10 Optional: The device hexadecimal code of the SR-IOV network device. The only allowed values are 158b , 1015 , and 1017 . 11 Optional: An array of one or more physical function (PF) names for the device. 12 Optional: An array of one or more PCI bus addresses for the PF of the device. Provide the address in the following format: 0000:02:00.1 . 13 Optional: The platform-specific network filter. The only supported platform is Red Hat OpenStack Platform (RHOSP). Acceptable values use the following format: openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx . Replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with the value from the /var/config/openstack/latest/network_data.json metadata file. 14 Optional: The driver type for the virtual functions. The only allowed values are netdevice and vfio-pci . The default value is netdevice . For a Mellanox NIC to work in Data Plane Development Kit (DPDK) mode on bare metal nodes, use the netdevice driver type and set isRdma to true . 15 Optional: Whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRDMA parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. 16 Optional: The link type for the VFs. You can specify one of the following values: eth or ib . Specify eth for Ethernet or ib for InfiniBand. The default value is eth . When linkType is set to ib , isRdma is automatically set to true by the SR-IOV Network Operator webhook. When linkType is set to ib , deviceType should not be set to vfio-pci . 13.4.1.1. SR-IOV network node configuration examples The following example describes the configuration for an InfiniBand device: Example configuration for an InfiniBand device apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "15b3" deviceID: "101b" rootDevices: - "0000:19:00.0" linkType: ib isRdma: true The following example describes the configuration for an SR-IOV network device in a RHOSP virtual machine: Example configuration for an SR-IOV device in a virtual machine apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 1 1 nicSelector: vendor: "15b3" deviceID: "101b" netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" 2 1 The numVfs field is always set to 1 when configuring the node network policy for a virtual machine. 
2 The netFilter field must refer to a network ID when the virtual machine is deployed on RHOSP. Valid values for netFilter are available from an SriovNetworkNodeState object. 13.4.1.2. Virtual function (VF) partitioning for SR-IOV devices In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into multiple resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs load with the vfio-pci driver. In such a deployment, the pfNames selector in your SriovNetworkNodePolicy custom resource (CR) can be used to specify a range of VFs for a pool using the following format: <pfname>#<first_vf>-<last_vf> . For example, the following YAML shows the selector for an interface named netpf0 with VF 2 through 7 : pfNames: ["netpf0#2-7"] netpf0 is the PF interface name. 2 is the first VF index (0-based) that is included in the range. 7 is the last VF index (0-based) that is included in the range. You can select VFs from the same PF by using different policy CRs if the following requirements are met: The numVfs value must be identical for policies that select the same PF. The VF index must be in the range of 0 to <numVfs>-1 . For example, if you have a policy with numVfs set to 8 , then the <first_vf> value must not be smaller than 0 , and the <last_vf> must not be larger than 7 . The VFs ranges in different policies must not overlap. The <first_vf> must not be larger than the <last_vf> . The following example illustrates NIC partitioning for an SR-IOV device. The policy policy-net-1 defines a resource pool net-1 that contains the VF 0 of PF netpf0 with the default VF driver. The policy policy-net-1-dpdk defines a resource pool net-1-dpdk that contains the VF 8 to 15 of PF netpf0 with the vfio VF driver. Policy policy-net-1 : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#0-0"] deviceType: netdevice Policy policy-net-1-dpdk : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#8-15"] deviceType: vfio-pci 13.4.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. 
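For reference, a minimal Ethernet policy might look like the following sketch. The metadata.name and resourceName values are placeholders, and the nicSelector values (vendor, pfNames, rootDevices) are examples only; substitute the values that are reported for your hardware in the SriovNetworkNodeState object for the node.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-example
  namespace: openshift-sriov-network-operator
spec:
  resourceName: netexample
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    vendor: "8086"
    pfNames: ["ens803f0"]
    rootDevices: ["0000:86:00.0"]
  deviceType: netdevice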
Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Additional resources Understanding how to update labels on nodes . 13.4.3. Troubleshooting SR-IOV configuration After following the procedure to configure an SR-IOV network device, the following sections address some error conditions. To display the state of nodes, run the following command: USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> where: <node_name> specifies the name of a node with an SR-IOV network device. Error output: Cannot allocate memory "lastSyncError": "write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory" When a node indicates that it cannot allocate memory, check the following items: Confirm that global SR-IOV settings are enabled in the BIOS for the node. Confirm that VT-d is enabled in the BIOS for the node. 13.4.4. Assigning an SR-IOV network to a VRF Important CNI VRF plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plug-in. To do this, add the VRF configuration to the optional metaPlugins parameter of the SriovNetwork resource. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. 13.4.4.1. Creating an additional SR-IOV network attachment with the CNI VRF plug-in The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional SR-IOV network attachment with the CNI VRF plug-in, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. 
Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } vlan: 0 resourceName: intelnics metaPlugins : | { "type": "vrf", 1 "vrfname": "example-vrf-name" 2 } 1 type must be set to vrf . 2 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command. USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-sriov-network-1 . Example output NAME AGE additional-sriov-network-1 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the VRF CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create an SR-IOV network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the SR-IOV additional network. Remote shell into the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- red 10 Confirm the VRF interface is master of the secondary interface: USD ip link Example output ... 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode ... 13.4.5. steps Configuring an SR-IOV network attachment 13.5. Configuring an SR-IOV Ethernet network attachment You can configure an Ethernet network attachment for an Single Root I/O Virtualization (SR-IOV) device in the cluster. 13.5.1. Ethernet device configuration object You can configure an Ethernet network device by defining an SriovNetwork object. The following YAML describes an SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: "<trust_vf>" 12 capabilities: <capabilities> 13 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: A Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: The spoof check mode of the VF. 
The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the object is rejected by the SR-IOV Network Operator. 7 A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition. 8 Optional: The link state of virtual function (VF). Allowed values are enable , disable and auto . 9 Optional: A maximum transmission rate, in Mbps, for the VF. 10 Optional: A minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 11 Optional: An IEEE 802.1p priority level for the VF. The default value is 0 . 12 Optional: The trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value that you specify in quotes, or the SR-IOV Network Operator rejects the object. 13 Optional: The capabilities to configure for this additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 13.5.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plug-in provides IP addresses for other CNI plug-ins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plug-in. 13.5.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 13.2. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 13.3. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 13.4. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 13.5. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 13.5.1.1.2.
Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. The SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 13.6. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 13.5.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plug-in allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 13.7. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 13.5.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating a SriovNetwork object. When you create a SriovNetwork object, the SR-IOV Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete a SriovNetwork object if it is attached to any pods in the running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 13.5.3.
steps Adding a pod to an SR-IOV additional network 13.5.4. Additional resources Configuring an SR-IOV network device 13.6. Configuring an SR-IOV InfiniBand network attachment You can configure an InfiniBand (IB) network attachment for an Single Root I/O Virtualization (SR-IOV) device in the cluster. 13.6.1. InfiniBand device configuration object You can configure an InfiniBand (IB) network device by defining an SriovIBNetwork object. The following YAML describes an SriovIBNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovIBNetwork object. Only pods in the target namespace can attach to the network device. 5 Optional: A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition. 6 Optional: The link state of virtual function (VF). Allowed values are enable , disable and auto . 7 Optional: The capabilities to configure for this network. You can specify "{ "ips": true }" to enable IP address support or "{ "infinibandGUID": true }" to enable IB Global Unique Identifier (GUID) support. 13.6.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plug-in provides IP addresses for other CNI plug-ins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plug-in. 13.6.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 13.8. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 13.9. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 13.10. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 13.11. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname.
For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 13.6.1.1.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 13.12. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 13.6.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plug-in allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 13.13. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 13.6.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating a SriovIBNetwork object. When you create a SriovIBNetwork object, the SR-IOV Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete a SriovIBNetwork object if it is attached to any pods in the running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovIBNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network.
Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovIBNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovIBNetwork object. USD oc get net-attach-def -n <namespace> 13.6.3. steps Adding a pod to an SR-IOV additional network 13.6.4. Additional resources Configuring an SR-IOV network device 13.7. Adding a pod to an SR-IOV additional network You can add a pod to an existing Single Root I/O Virtualization (SR-IOV) network. 13.7.1. Runtime configuration for a network attachment When attaching a pod to an additional network, you can specify a runtime configuration to make specific customizations for the pod. For example, you can request a specific MAC hardware address. You specify the runtime configuration by setting an annotation in the pod specification. The annotation key is k8s.v1.cni.cncf.io/networks , and it accepts a JSON object that describes the runtime configuration. 13.7.1.1. Runtime configuration for an Ethernet-based SR-IOV attachment The following JSON describes the runtime configuration options for an Ethernet-based SR-IOV network attachment. [ { "name": "<name>", 1 "mac": "<mac_address>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "net1", "mac": "20:04:0f:f1:88:01", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 13.7.1.2. Runtime configuration for an InfiniBand-based SR-IOV attachment The following JSON describes the runtime configuration options for an InfiniBand-based SR-IOV network attachment. [ { "name": "<network_attachment>", 1 "infiniband-guid": "<guid>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 The InfiniBand GUID for the SR-IOV device. To use this feature, you also must specify { "infinibandGUID": true } in the SriovIBNetwork object. 3 The IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovIBNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "ib1", "infiniband-guid": "c2:11:22:33:44:55:66:77", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 13.7.2. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network.
When a pod is created, additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Note The SR-IOV Network Resource Injector adds the resource field to the first container in a pod automatically. If you are using an Intel network interface controller (NIC) in Data Plane Development Kit (DPDK) mode, only the first container in your pod is configured to access the NIC. Your SR-IOV additional network is configured for DPDK mode if the deviceType is set to vfio-pci in the SriovNetworkNodePolicy object. You can work around this issue by either ensuring that the container that needs access to the NIC is the first container defined in the Pod object or by disabling the Network Resource Injector. For more information, see BZ#1990953 . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Install the SR-IOV Operator. Create either an SriovNetwork object or an SriovIBNetwork object to attach the pod to. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace around the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/networks-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 13.7.3. Creating a non-uniform memory access (NUMA) aligned SR-IOV pod You can create a NUMA aligned SR-IOV pod by restricting SR-IOV and the CPU resources allocated from the same NUMA node with restricted or single-numa-node Topology Manager policies. Prerequisites You have installed the OpenShift CLI ( oc ).
You have configured the CPU Manager policy to static . For more information on CPU Manager, see the "Additional resources" section. You have configured the Topology Manager policy to single-numa-node . Note When single-numa-node is unable to satisfy the request, you can configure the Topology Manager policy to restricted . Procedure Create the following SR-IOV pod spec, and then save the YAML in the <name>-sriov-pod.yaml file. Replace <name> with a name for this pod. The following example shows an SR-IOV pod spec: apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: ["sleep", "infinity"] resources: limits: memory: "1Gi" 3 cpu: "2" 4 requests: memory: "1Gi" cpu: "2" 1 Replace <name> with the name of the SR-IOV network attachment definition CR. 2 Replace <image> with the name of the sample-pod image. 3 To create the SR-IOV pod with guaranteed QoS, set memory limits equal to memory requests . 4 To create the SR-IOV pod with guaranteed QoS, set cpu limits equal to cpu requests . Create the sample SR-IOV pod by running the following command: USD oc create -f <filename> 1 1 Replace <filename> with the name of the file you created in the previous step. Confirm that the sample-pod is configured with guaranteed QoS. USD oc describe pod sample-pod Confirm that the sample-pod is allocated with exclusive CPUs. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus Confirm that the SR-IOV device and CPUs that are allocated for the sample-pod are on the same NUMA node. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus 13.7.4. Additional resources Configuring an SR-IOV Ethernet network attachment Configuring an SR-IOV InfiniBand network attachment Using CPU Manager 13.8. Using high performance multicast You can use multicast on your Single Root I/O Virtualization (SR-IOV) hardware network. 13.8.1. High performance multicast The OpenShift SDN default Container Network Interface (CNI) network provider supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. For applications such as streaming media, like Internet Protocol television (IPTV) and multipoint videoconferencing, you can utilize Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance. When using additional SR-IOV interfaces for multicast: Multicast packets must be sent or received by a pod through the additional SR-IOV interface. The physical network which connects the SR-IOV interfaces decides the multicast routing and topology, which is not controlled by OpenShift Container Platform. 13.8.2. Configuring an SR-IOV interface for multicast The following procedure creates an example SR-IOV interface for multicast. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role.
Procedure Create a SriovNetworkNodePolicy object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "8086" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0'] Create a SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { "type": "host-local", 2 "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [ {"dst": "224.0.0.0/5"}, {"dst": "232.0.0.0/5"} ], "gateway": "10.56.217.1" } resourceName: example 1 2 If you choose to configure DHCP as IPAM, ensure that you provision the following default routes through your DHCP server: 224.0.0.0/5 and 232.0.0.0/5 . This is to override the static multicast route set by the default network provider. Create a pod with multicast application: apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: ["NET_ADMIN"] 1 command: [ "sleep", "infinity"] 1 The NET_ADMIN capability is required only if your application needs to assign the multicast IP address to the SR-IOV interface. Otherwise, it can be omitted. 13.9. Using virtual functions (VFs) with DPDK and RDMA modes You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA). Important The Data Plane Development Kit (DPDK) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 13.9.1. Using a virtual function in DPDK mode with an Intel NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "8086" deviceID: "158b" pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: vfio-pci 1 1 Specify the driver type for the virtual functions to vfio-pci . Note Please refer to the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. 
Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f intel-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: "{}" 1 vlan: <vlan> resourceName: intelnics 1 Specify an empty object "{}" for the ipam CNI plug-in. DPDK works in userspace mode and does not require an IP address. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . Create the SriovNetwork object by running the following command: USD oc create -f intel-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/intelnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/intelnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
Using a virtual function in DPDK mode with a Mellanox NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-dpdk-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. The only allowed values for Mellanox cards are 1015 , 1017 . 2 Specify the driver type for the virtual functions to netdevice . Mellanox SR-IOV VF can work in DPDK mode without using the vfio-pci device type. VF device appears as a kernel network interface inside a container. 3 Enable RDMA mode. This is required by Mellanox cards to work in DPDK mode. Note Please refer to Configuring SR-IOV network devices section for detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . Create the SriovNetwork object by running the following command: USD oc create -f mlx-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the mlx-dpdk-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/mlxnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/mlxnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where SriovNetwork object mlx-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both Pod spec and SriovNetwork object.
2 Specify the DPDK image which includes your application and the DPDK library used by application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the DPDK pod under /dev/hugepages . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs be allocated from kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepage requires adding kernel arguments to Nodes. Create the DPDK pod by running the following command: USD oc create -f mlx-dpdk-pod.yaml 13.9.3. Using a virtual function in RDMA mode with a Mellanox NIC RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform. Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of SR-IOV network device. The only allowed values for Mellanox cards are 1015 , 1017 . 2 Specify the driver type for the virtual functions to netdevice . 3 Enable RDMA mode. Note Please refer to the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-rdma-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 ... 
vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . Create the SriovNetwork object by running the following command: USD oc create -f mlx-rdma-network.yaml Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: memory: "1Gi" cpu: "4" 5 hugepages-1Gi: "4Gi" 6 requests: memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where SriovNetwork object mlx-rdma-network is created. If you would like to create the pod in a different namespace, change target_namespace in both Pod spec and SriovNetwork object. 2 Specify the RDMA image which includes your application and RDMA library used by application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to RDMA pod under /dev/hugepages . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Specify number of CPUs. The RDMA pod usually requires exclusive CPUs be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and create pod with Guaranteed QoS. 6 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepage requires adding kernel arguments to Nodes. Create the RDMA pod by running the following command: USD oc create -f mlx-rdma-pod.yaml 13.10. Uninstalling the SR-IOV Network Operator To uninstall the SR-IOV Network Operator, you must delete any running SR-IOV workloads, uninstall the Operator, and delete the webhooks that the Operator used. 13.10.1. Uninstalling the SR-IOV Network Operator As a cluster administrator, you can uninstall the SR-IOV Network Operator. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have the SR-IOV Network Operator installed. Procedure Delete all SR-IOV custom resources (CRs): USD oc delete sriovnetwork -n openshift-sriov-network-operator --all USD oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all USD oc delete sriovibnetwork -n openshift-sriov-network-operator --all Follow the instructions in the "Deleting Operators from a cluster" section to remove the SR-IOV Network Operator from your cluster.
Delete the SR-IOV custom resource definitions that remain in the cluster after the SR-IOV Network Operator is uninstalled: USD oc delete crd sriovibnetworks.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io USD oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io USD oc delete crd sriovnetworks.sriovnetwork.openshift.io USD oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io Delete the SR-IOV webhooks: USD oc delete mutatingwebhookconfigurations network-resources-injector-config USD oc delete MutatingWebhookConfiguration sriov-operator-webhook-config USD oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config Delete the SR-IOV Network Operator namespace: USD oc delete namespace openshift-sriov-network-operator Additional resources Deleting Operators from a cluster
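As an optional final check (a minimal sketch that is not part of the official uninstall procedure), you can confirm that nothing was left behind by listing any remaining SR-IOV CRDs and webhook configurations:
USD oc get crd | grep sriovnetwork.openshift.io
USD oc get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep -E 'sriov|network-resources-injector'
Both commands should return no matching resources once the custom resource definitions, webhooks, and namespace have been deleted.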
|
[
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded",
"apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF",
"OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"USD{OC_VERSION}\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase sriov-network-operator.4.4.0-202006160135 Succeeded",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'",
"oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node-label>} }]'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", ...] 12 netFilter: \"<filter_string>\" 13 deviceType: <device_type> 14 isRdma: false 15 linkType: <link_type> 16",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"15b3\" deviceID: \"101b\" rootDevices: - \"0000:19:00.0\" linkType: ib isRdma: true",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"15b3\" deviceID: \"101b\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2",
"pfNames: [\"netpf0#2-7\"]",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>",
"\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }",
"oc create -f sriov-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE additional-sriov-network-1 14m",
"ip vrf show",
"Name Table ----------------------- red 10",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"",
"oc create -f <filename> 1",
"oc describe pod sample-pod",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example",
"apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1",
"oc create -f intel-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: \"{}\" 1 vlan: <vlan> resourceName: intelnics",
"oc create -f intel-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f intel-dpdk-pod.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-dpdk-pod.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-rdma-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-rdma-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-rdma-pod.yaml",
"oc delete sriovnetwork -n openshift-sriov-network-operator --all",
"oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all",
"oc delete sriovibnetwork -n openshift-sriov-network-operator --all",
"oc delete crd sriovibnetworks.sriovnetwork.openshift.io",
"oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io",
"oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io",
"oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io",
"oc delete crd sriovnetworks.sriovnetwork.openshift.io",
"oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io",
"oc delete mutatingwebhookconfigurations network-resources-injector-config",
"oc delete MutatingWebhookConfiguration sriov-operator-webhook-config",
"oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config",
"oc delete namespace openshift-sriov-network-operator"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/hardware-networks
|
Chapter 5. Accessing an FTP server using Skupper
|
Chapter 5. Accessing an FTP server using Skupper Securely connect to an FTP server on a remote Kubernetes cluster This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites. Overview This example shows you how you can use Skupper to connect an FTP client on one Kubernetes cluster to an FTP server on another. It demonstrates use of Skupper with multi-port services such as FTP. It uses FTP in passive mode (which is more typical these days) and a restricted port range that simplifies Skupper configuration. Prerequisites The kubectl command-line tool, version 1.15 or later ( installation guide ) Access to at least one Kubernetes cluster, from any provider you choose Procedure Clone the repo for this example. Install the Skupper command-line tool Set up your namespaces Deploy the FTP server Create your sites Link your sites Expose the FTP server Run the FTP client Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository. Install the Skupper command-line tool This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment. See the Installation for details about installing the CLI. For configured systems, use the following command: Set up your namespaces Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate. Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it. A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs. For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context. Note The login procedure varies by provider. See the documentation for yours: Amazon Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Google Kubernetes Engine (GKE) IBM Kubernetes Service OpenShift Public: Private: Deploy the FTP server In Private, use kubectl apply to deploy the FTP server. Private: Sample output: Create your sites A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace. For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome. Public: Sample output: Private: Sample output: As you move through the steps below, you can use skupper status at any time to check your progress. Link your sites A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests. Creating a link requires use of two skupper commands in conjunction, skupper token create and skupper link create . The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. 
Then, in a remote site, The skupper link create command uses the token to create a link to the site that generated it. Note The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it. First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites. Public: Sample output: Private: Sample output: If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation. Expose the FTP server In Private, use skupper expose to expose the FTP server on all linked sites. Private: Sample output: Run the FTP client In Public, use kubectl run and the curl image to perform FTP put and get operations. Public: Sample output:
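If the put or get operation fails, it can help to confirm that the link is connected and that the ftp-server service is exposed before debugging the client. The commands below are a minimal sketch that reuses only the skupper commands already referenced in this example; the exact output varies by Skupper version.
In the Private terminal, check the link created from the token:
skupper link status
In either terminal, check the site and its exposed services:
skupper status
If the link is not connected, recreate the token and the link, keeping in mind that tokens expire after a single use or 15 minutes after creation.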
|
[
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"apply -f server",
"kubectl apply -f server deployment.apps/ftp-server created",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose deployment/ftp-server --port 21100 --port 21",
"skupper expose deployment/ftp-server --port 21100 --port 21 deployment ftp-server exposed as ftp-server",
"echo \"Hello!\" | kubectl run ftp-client --stdin --rm --image=docker.io/curlimages/curl --restart=Never -- -s -T - ftp://example:example@ftp-server/greeting run ftp-client --attach --rm --image=docker.io/curlimages/curl --restart=Never -- -s ftp://example:example@ftp-server/greeting",
"echo \"Hello!\" | kubectl run ftp-client --stdin --rm --image=docker.io/curlimages/curl --restart=Never -- -s -T - ftp://example:example@ftp-server/greeting pod \"ftp-client\" deleted kubectl run ftp-client --attach --rm --image=docker.io/curlimages/curl --restart=Never -- -s ftp://example:example@ftp-server/greeting Hello! pod \"ftp-client\" deleted"
] |
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/accessing_an_ftp_server_using_skupper
|
Tested deployment models
|
Tested deployment models Red Hat Ansible Automation Platform 2.5 Plan your deployment of Ansible Automation Platform Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/index
|
Chapter 22. Hardware Enablement
|
Chapter 22. Hardware Enablement Runtime Instrumentation for IBM System z Support for the Runtime Instrumentation feature is available as a Technology Preview in Red Hat Enterprise Linux 7.2 on IBM System z. Runtime Instrumentation enables advanced analysis and execution for a number of user-space applications available with the IBM zEnterprise EC12 system. LSI Syncro CS HA-DAS adapters Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 are encouraged to provide feedback to Red Hat and LSI. For more information on LSI Syncro CS solutions, please visit http://www.lsi.com/products/shared-das/pages/default.aspx .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/technology-preview-hardware_enablement
|
Chapter 10. Log storage
|
Chapter 10. Log storage 10.1. About log storage You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a ClusterLogForwarder custom resource (CR) to forward logs to an external store. 10.1.1. Log storage types Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as an alternative to Elasticsearch as a log store for the logging. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. 10.1.1.1. About the Elasticsearch log store The logging Elasticsearch instance is optimized and tested for short-term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended that you move the data to a third-party storage system. Elasticsearch organizes the log data from Fluentd into datastores, or indices , then subdivides each index into multiple pieces called shards , which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called replicas , which Elasticsearch also spreads across the Elasticsearch nodes. The ClusterLogging custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the ClusterLogging CR. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a ClusterLogging custom resource (CR) to increase the number of Elasticsearch nodes, as needed. See the Elasticsearch documentation for considerations involved in configuring storage. Note A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host. Role-based access control (RBAC) applied on the Elasticsearch indices enables the controlled access of the logs to the developers. Administrators can access all logs and developers can access only the logs in their projects. 10.1.2. Querying log stores You can query Loki by using the LogQL log query language . 10.1.3. Additional resources Loki components documentation Loki Object Storage documentation 10.2. Installing log storage You can use the OpenShift CLI ( oc ) or the OpenShift Container Platform web console to deploy a log store on your OpenShift Container Platform cluster. Note The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 10.2.1. Deploying a Loki log store You can use the Loki Operator to deploy an internal Loki log store on your OpenShift Container Platform cluster. After you install the Loki Operator, you must configure Loki object storage by creating a secret, and create a LokiStack custom resource (CR). 10.2.1.1. Deployment Sizing Sizing for Loki follows the format of N<x>.
<size> where the value <N> is the number of instances and <size> specifies performance capabilities. Note 1x.extra-small is for demo purposes only, and is not supported. Table 10.1. Loki Sizing 1x.extra-small 1x.small 1x.medium Data transfer Demo use only. 500GB/day 2TB/day Queries per second (QPS) Demo use only. 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 3 Total CPU requests 5 vCPUs 36 vCPUs 54 vCPUs Total Memory requests 7.5Gi 63Gi 139Gi Total Disk requests 150Gi 300Gi 450Gi 10.2.1.1.1. Supported API Custom Resource Definitions LokiStack development is ongoing; not all APIs are currently supported. CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported in 5.5 RulerConfig rulerconfig.loki.grafana/v1beta1 Technology Preview AlertingRule alertingrule.loki.grafana/v1beta1 Technology Preview RecordingRule recordingrule.loki.grafana/v1beta1 Technology Preview Important Usage of the RulerConfig , AlertingRule , and RecordingRule custom resource definitions (CRDs) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.2.1.2. Installing the Loki Operator by using the OpenShift Container Platform web console To install and configure logging on your OpenShift Container Platform cluster, additional Operators must be installed. This can be done from the Operator Hub within the web console. OpenShift Container Platform Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you.
Select Enable operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. 10.2.1.3. Creating a secret for Loki object storage by using the web console To configure Loki object storage, you must create a secret. You can create a secret by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed the Loki Operator. Procedure Go to Workloads Secrets in the Administrator perspective of the OpenShift Container Platform web console. From the Create drop-down list, select From YAML . Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames , endpoint , and region fields to define the object storage location. AWS is used in the following example: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Additional resources Loki object storage 10.2.1.4. Creating a LokiStack custom resource by using the web console You can create a LokiStack custom resource (CR) by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed the Loki Operator. Procedure Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Select your Loki deployment size. 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 10.2.1.5. Installing Loki Operator by using the CLI To install and configure logging on your OpenShift Container Platform cluster, additional Operators must be installed. 
This can be done from the OpenShift Container Platform CLI. OpenShift Container Platform Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object: USD oc apply -f <filename>.yaml 10.2.1.6. Creating a secret for Loki object storage by using the CLI To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a secret in the directory that contains your certificate and key files by running the following command: USD oc create secret generic -n openshift-logging <your_secret_name> \ --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password> Note Use generic or opaque secrets for best results. Verification Verify that a secret was created by running the following command: USD oc get secrets Additional resources Loki object storage 10.2.1.7. Creating a LokiStack custom resource by using the CLI You can create a LokiStack custom resource (CR) by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small 1 storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 2 type: s3 3 storageClassName: <storage_class_name> 4 tenants: mode: openshift-logging 1 Supported size options for production instances of Loki are 1x.small and 1x.medium . 2 Enter the name of your log store secret. 3 Enter the type of your log store secret. 4 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage.
Available storage classes for your cluster can be listed by using oc get storageclasses . Apply the LokiStack CR: USD oc apply -f <filename>.yaml Verification Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output: USD oc get pods -n openshift-logging Confirm that you see several pods for components of the logging, similar to the following list: Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s 10.2.2. Loki object storage The Loki Operator supports AWS S3 , as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation . Azure , GCS , and Swift are also supported. The recommended nomenclature for Loki storage is logging-loki- <your_storage_provider> . The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider. Table 10.2. Secret type quick reference Storage provider Secret type value AWS s3 Azure azure Google Cloud gcs Minio s3 OpenShift Data Foundation s3 Swift swift 10.2.2.1. AWS storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on AWS. You created an AWS IAM Policy and IAM User . Procedure Create an object storage secret with the name logging-loki-aws by running the following command: USD oc create secret generic logging-loki-aws \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" 10.2.2.2. Azure storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Azure. Procedure Create an object storage secret with the name logging-loki-azure by running the following command: USD oc create secret generic logging-loki-azure \ --from-literal=container="<azure_container_name>" \ --from-literal=environment="<azure_environment>" \ 1 --from-literal=account_name="<azure_account_name>" \ --from-literal=account_key="<azure_account_key>" 1 Supported environment values are AzureGlobal , AzureChinaCloud , AzureGermanCloud , or AzureUSGovernment . 10.2.2.3. Google Cloud Platform storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a project on Google Cloud Platform (GCP). 
You created a bucket in the same project. You created a service account in the same project for GCP authentication. Procedure Copy the service account credentials received from GCP into a file called key.json . Create an object storage secret with the name logging-loki-gcs by running the following command: USD oc create secret generic logging-loki-gcs \ --from-literal=bucketname="<bucket_name>" \ --from-file=key.json="<path/to/key.json>" 10.2.2.4. Minio storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You have Minio deployed on your cluster. You created a bucket on Minio. Procedure Create an object storage secret with the name logging-loki-minio by running the following command: USD oc create secret generic logging-loki-minio \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<minio_bucket_endpoint>" \ --from-literal=access_key_id="<minio_access_key_id>" \ --from-literal=access_key_secret="<minio_access_key_secret>" 10.2.2.5. OpenShift Data Foundation storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You deployed OpenShift Data Foundation . You configured your OpenShift Data Foundation cluster for object storage . Procedure Create an ObjectBucketClaim custom resource in the openshift-logging namespace: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io Get bucket properties from the associated ConfigMap object by running the following command: BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}') Get bucket access key from the associated secret by running the following command: ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d) Create an object storage secret with the name logging-loki-odf by running the following command: USD oc create -n openshift-logging secret generic logging-loki-odf \ --from-literal=access_key_id="<access_key_id>" \ --from-literal=access_key_secret="<secret_access_key>" \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="https://<bucket_host>:<bucket_port>" 10.2.2.6. Swift storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Swift. 
Procedure Create an object storage secret with the name logging-loki-swift by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" You can optionally provide project-specific data, region, or both by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" \ --from-literal=project_id="<swift_project_id>" \ --from-literal=project_name="<swift_project_name>" \ --from-literal=project_domain_id="<swift_project_domain_id>" \ --from-literal=project_domain_name="<swift_project_domain_name>" \ --from-literal=region="<swift_region>" 10.2.3. Deploying an Elasticsearch log store You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OpenShift Container Platform cluster. Note The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 10.2.3.1. Storage considerations for Elasticsearch A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims (PVCs). Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/*.log to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts. 10.2.3.2. 
Installing the OpenShift Elasticsearch Operator by using the web console The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. Prerequisites Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node. Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments. Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as OpenShift Container Platform metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.x as the Update channel . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . 10.2.3.3. Installing the OpenShift Elasticsearch Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Elasticsearch Operator. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Elasticsearch is a memory-intensive application. By default, OpenShift Container Platform installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Container Platform nodes might not have enough memory to run Elasticsearch within your cluster. 
If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. You have administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file: apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as metric, which would cause conflicts. 2 String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object as a YAML file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-x.y as the channel. See the following note. 3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM). Note Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release. Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release. Apply the subscription by running the following command: USD oc apply -f <filename>.yaml The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster. 
Verification Run the following command: USD oc get csv -n --all-namespaces Observe the output and confirm that pods for the OpenShift Elasticsearch Operator exist in each namespace Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded kube-node-lease elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded kube-public elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded kube-system elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded non-destructive-test elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded openshift-apiserver elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded ... 10.2.4. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.3. Configuring the LokiStack log store In logging documentation, LokiStack refers to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store. 10.3.1. 
Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 10.3.2. Enabling stream-based retention with Loki With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules. To enable stream-based retention, create a LokiStack custom resource (CR): Example global stream-based retention apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 3 Contains the LogQL query used to define the log stream. Example per-tenant stream-based retention apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This is not for managing the retention for stored logs. Global retention periods for stored logs to a supported maximum of 30 days is configured with your object storage. 10.3.3. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. 
For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. 
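As an alternative to editing the LokiStack CR in a file and reapplying it, you can set the same ingestion limits with a merge patch. This is a sketch only; it assumes the LokiStack is named logging-loki in the openshift-logging namespace and uses the example values shown above:

# raise the ingestion burst size and rate in one step
USD oc -n openshift-logging patch lokistack logging-loki --type=merge -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'

You can then watch the collector logs, for example with USD oc logs -n openshift-logging -l component=collector --tail=100, to confirm that the 429 errors stop recurring.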
As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 10.3.4. Additional Resources Loki components documentation Loki Query Language (LogQL) documentation Grafana Dashboard documentation Loki Storage Schema documentation 10.4. Configuring the Elasticsearch log store You can use Elasticsearch 6 to store and organize log data. You can make modifications to your log store, including: Storage for your Elasticsearch cluster Shard replication across data nodes in the cluster, from full replication to no replication External access to Elasticsearch data 10.4.1. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.4.2. Forwarding audit logs to the log store By default, OpenShift Logging does not store audit logs in the internal OpenShift Container Platform Elasticsearch log store. You can send audit logs to this log store so, for example, you can view them in Kibana. To send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API. Important The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. Verify that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. Logging does not comply with those regulations. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. 
You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources About log collection and forwarding 10.4.3. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. 
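After saving the change, you can read the retention settings back from the ClusterLogging CR to confirm that they were stored as intended. This is an illustrative check that assumes the default instance name:

# print the configured retention policy
USD oc get clusterlogging instance -n openshift-logging -o jsonpath='{.spec.logStore.retentionPolicy}'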
By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 10.4.4. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 
4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 10.4.5. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit clusterlogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy, when using 5 or more nodes. You cannot apply this policy on deployments of single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 10.4.6. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 10.4.7. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur. 
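Before following the procedure below, you can list the storage classes available in your cluster to choose an appropriate storageClassName for Elasticsearch. For example:

USD oc get storageclass

The default storage class, if one is configured, is marked with (default) in the output.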
Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. 10.4.8. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 10.4.9. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs requires a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here. 
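Before starting the rolling restart procedure below, it can help to list the Elasticsearch deployments that you will pause and resume. This is an illustrative check; the elasticsearch-cdm-* deployment names vary per cluster:

# list the Elasticsearch data node deployments
USD oc get deployments -n openshift-logging | grep elasticsearch-cdm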
For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}' 10.4.10. Exposing the log store service as a route By default, the log store that is deployed with logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route, your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access to the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certifcate or use the command in the step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes. 
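As an alternative to writing the Route object by hand, you can generate an equivalent reencrypt route directly with the oc create route command. This sketch assumes the admin-ca file extracted in the earlier step and the default elasticsearch service name; if you use it, you can skip the next two steps that edit and create the route YAML file:

# create a reencrypt route and attach the log store CA as the destination CA
USD oc create route reencrypt elasticsearch --service=elasticsearch --dest-ca-cert=admin-ca -n openshift-logging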
Run the following command to add the log store CA certificate to the route YAML you created in the step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable. USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 10.4.11. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging
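As a final check after removing the logStore and visualization stanzas, you can confirm that no Elasticsearch or Kibana pods remain in the namespace. This is an illustrative check; the command should return no output once the components are removed:

USD oc get pods -n openshift-logging | grep -E 'elasticsearch|kibana'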
|
[
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: charsion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 2 spec: channel: stable 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small 1 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 2 type: s3 3 storageClassName: <storage_class_name> 4 tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded kube-node-lease elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded kube-public elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded kube-system elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded non-destructive-test elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded openshift-apiserver elasticsearch-operator.v5.7.1 OpenShift Elasticsearch Operator 5.7.1 elasticsearch-operator.v5.7.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/log-storage
|
Chapter 3. OpenShift Data Foundation operators
|
Chapter 3. OpenShift Data Foundation operators Red Hat OpenShift Data Foundation is comprised of the following three Operator Lifecycle Manager (OLM) operator bundles, deploying four operators which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated: OpenShift Data Foundation odf-operator OpenShift Container Storage ocs-operator rook-ceph-operator Multicloud Object Gateway mcg-operator Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state or approaching that state, with minimal administrator intervention. 3.1. OpenShift Data Foundation operator The odf-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators. The odf-operator has the following primary functions: Enforces the configuration and versioning of the other operators that comprise OpenShift Data Foundation. It does this by using two primary mechanisms: operator dependencies and Subscription management. The odf-operator bundle specifies dependencies on other OLM operators to make sure they are always installed at specific versions. The operator itself manages the Subscriptions for all other operators to make sure the desired versions of those operators are available for installation by the OLM. Provides the OpenShift Data Foundation external plugin for the OpenShift Console. Provides an API to integrate storage solutions with the OpenShift Console. 3.1.1. Components The odf-operator has a dependency on the ocs-operator package. It also manages the Subscription of the mcg-operator . In addition, the odf-operator bundle defines a second Deployment for the OpenShift Data Foundation external plugin for the OpenShift Console. This defines an nginx -based Pod that serves the necessary files to register and integrate OpenShift Data Foundation dashboards directly into the OpenShift Container Platform Console. 3.1.2. Design diagram This diagram illustrates how odf-operator is integrated with the OpenShift Container Platform. Figure 3.1. OpenShift Data Foundation Operator 3.1.3. Responsibilities The odf-operator defines the following CRD: StorageSystem The StorageSystem CRD represents an underlying storage system that provides data storage and services for OpenShift Container Platform. It triggers the operator to ensure the existence of a Subscription for a given Kind of storage system. 3.1.4. Resources The odf-operator creates the following CRs in response to the spec of a given StorageSystem. Operator Lifecycle Manager Resources Creates a Subscription for the operator which defines and reconciles the given StorageSystem's Kind. 3.1.5. Limitation The odf-operator does not provide any data storage or services itself. It exists as an integration and management layer for other storage systems. 3.1.6. High availability High availability is not a primary requirement for the odf-operator Pod, similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.1.7. Relevant config files The odf-operator comes with a ConfigMap of variables that can be used to modify the behavior of the operator. 3.1.8.
Relevant log files To get an understanding of the OpenShift Data Foundation and troubleshoot issues, you can look at the following: Operator Pod logs StorageSystem status Underlying storage system CRD statuses Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageSystem status and events The StorageSystem CR stores the reconciliation details in the status of the CR and has associated events. The spec of the StorageSystem contains the name, namespace, and Kind of the actual storage system's CRD, which the administrator can use to find further information on the status of the storage system. 3.1.9. Lifecycle The odf-operator is required to be present as long as the OpenShift Data Foundation bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Data Foundation CSV. At least one instance of the pod should be in Ready state. The operator operands such as CRDs should not affect the lifecycle of the operator. The creation and deletion of StorageSystems is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate application programming interface (API) calls. 3.2. OpenShift Container Storage operator The ocs-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators and serves as a configuration gateway for the features provided by the other operators. It does not directly manage the other operators. The ocs-operator has the following primary functions: Creates Custom Resources (CRs) that trigger the other operators to reconcile against them. Abstracts the Ceph and Multicloud Object Gateway configurations and limits them to known best practices that are validated and supported by Red Hat. Creates and reconciles the resources required to deploy containerized Ceph and NooBaa according to the support policies. 3.2.1. Components The ocs-operator does not have any dependent components. However, the operator has a dependency on the existence of all the custom resource definitions (CRDs) from other operators, which are defined in the ClusterServiceVersion (CSV). 3.2.2. Design diagram This diagram illustrates how OpenShift Container Storage is integrated with the OpenShift Container Platform. Figure 3.2. OpenShift Container Storage Operator 3.2.3. Responsibilities The two ocs-operator CRDs are: OCSInitialization StorageCluster OCSInitialization is a singleton CRD used for encapsulating operations that apply at the operator level. The operator takes care of ensuring that one instance always exists. The CR triggers the following: Performs initialization tasks required for OpenShift Container Storage. If needed, these tasks can be triggered to run again by deleting the OCSInitialization CRD. Ensures that the required Security Context Constraints (SCCs) for OpenShift Container Storage are present. Manages the deployment of the Ceph toolbox Pod, used for performing advanced troubleshooting and recovery operations. The StorageCluster CRD represents the system that provides the full functionality of OpenShift Container Storage. It triggers the operator to ensure the generation and reconciliation of Rook-Ceph and NooBaa CRDs. The ocs-operator algorithmically generates the CephCluster and NooBaa CRDs based on the configuration in the StorageCluster spec. 
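A quick way to inspect the StorageCluster and the CRs it generates is to query them directly with oc. The following is a hedged sketch: the openshift-storage namespace and the StorageCluster name ocs-storagecluster are the usual defaults, not guaranteed values.
oc get storagecluster -n openshift-storage                          # list StorageClusters and their phase
oc describe storagecluster ocs-storagecluster -n openshift-storage  # reconciliation status, conditions, and events
oc get cephcluster,noobaa -n openshift-storage                      # confirm the generated CephCluster and NooBaa CRs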
The operator also creates additional CRs, such as CephBlockPools , Routes , and so on. These resources are required for enabling different features of OpenShift Container Storage. Currently, only one StorageCluster CR per OpenShift Container Platform cluster is supported. 3.2.4. Resources The ocs-operator creates the following CRs in response to the spec of the CRDs it defines. The configuration of some of these resources can be overridden, allowing for changes to the generated spec or not creating them altogether. General resources Events Creates various events when required in response to reconciliation. Persistent Volumes (PVs) PVs are not created directly by the operator. However, the operator keeps track of all the PVs created by the Ceph CSI drivers and ensures that the PVs have appropriate annotations for the supported features. Quickstarts Deploys various Quickstart CRs for the OpenShift Container Platform Console. Rook-Ceph resources CephBlockPool Define the default Ceph block pools. CephFilesystem Define the default Ceph filesystem. Route Define the route for the Ceph object store. StorageClass Define the default Storage classes, for example, for CephBlockPool and CephFilesystem . VolumeSnapshotClass Define the default volume snapshot classes for the corresponding storage classes. Multicloud Object Gateway resources NooBaa Define the default Multicloud Object Gateway system. Monitoring resources Metrics Exporter Service Metrics Exporter Service Monitor PrometheusRules 3.2.5. Limitation The ocs-operator neither deploys nor reconciles the other Pods of OpenShift Data Foundation. The ocs-operator CSV defines the top-level components, such as operator Deployments, and the Operator Lifecycle Manager (OLM) reconciles the specified components. 3.2.6. High availability High availability is not a primary requirement for the ocs-operator Pod, similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.2.7. Relevant config files The ocs-operator configuration is entirely specified by the CSV and is not modifiable without a custom build of the CSV. 3.2.8. Relevant log files To get an understanding of the OpenShift Container Storage and troubleshoot issues, you can look at the following: Operator Pod logs StorageCluster status and events OCSInitialization status Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageCluster status and events The StorageCluster CR stores the reconciliation details in the status of the CR and has associated events. Status contains a section of the expected container images. It shows the container images that it expects to be present in the pods from other operators and the images that it currently detects. This helps to determine whether the OpenShift Container Storage upgrade is complete. OCSInitialization status This status shows whether the initialization tasks are completed successfully. 3.2.9. Lifecycle The ocs-operator is required to be present as long as the OpenShift Container Storage bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Container Storage CSV. At least one instance of the pod should be in Ready state.
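To confirm that at least one ocs-operator instance is Ready, commands along these lines can be used; this is a sketch that assumes the default openshift-storage namespace and the standard deployment name.
oc get deployment ocs-operator -n openshift-storage    # ready vs. desired replicas
oc get pods -n openshift-storage | grep ocs-operator   # pod status at a glance
oc get csv -n openshift-storage                        # OLM view of the installed ClusterServiceVersions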
The operator operands such as CRDs should not affect the lifecycle of the operator. An OCSInitialization CR should always exist. The operator creates one if it does not exist. The creation and deletion of StorageClusters is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate API calls. 3.3. Rook-Ceph operator Rook-Ceph operator is the Rook operator for Ceph in the OpenShift Data Foundation. Rook enables Ceph storage systems to run on the OpenShift Container Platform. The Rook-Ceph operator is a simple container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy. 3.3.1. Components The Rook-Ceph operator manages a number of components as part of the OpenShift Data Foundation deployment. Ceph-CSI Driver The operator creates and updates the CSI driver, including a provisioner for each of the two drivers, RADOS block device (RBD) and Ceph filesystem (CephFS) and a volume plugin daemonset for each of the two drivers. Ceph daemons Mons The monitors (mons) provide the core metadata store for Ceph. OSDs The object storage daemons (OSDs) store the data on underlying devices. Mgr The manager (mgr) collects metrics and provides other internal functions for Ceph. RGW The RADOS Gateway (RGW) provides the S3 endpoint to the object store. MDS The metadata server (MDS) provides CephFS shared volumes. 3.3.2. Design diagram The following image illustrates how Ceph Rook integrates with OpenShift Container Platform. Figure 3.3. Rook-Ceph Operator With Ceph running in the OpenShift Container Platform cluster, OpenShift Container Platform applications can mount block devices and filesystems managed by Rook-Ceph, or can use the S3/Swift API for object storage. 3.3.3. Responsibilities The Rook-Ceph operator is a container that bootstraps and monitors the storage cluster. It performs the following functions: Automates the configuration of storage components Starts, monitors, and manages the Ceph monitor pods and Ceph OSD daemons to provide the RADOS storage cluster Initializes the pods and other artifacts to run the services to manage: CRDs for pools Object stores (S3/Swift) Filesystems Monitors the Ceph mons and OSDs to ensure that the storage remains available and healthy Deploys and manages Ceph mons placement while adjusting the mon configuration based on cluster size Watches the desired state changes requested by the API service and applies the changes Initializes the Ceph-CSI drivers that are needed for consuming the storage Automatically configures the Ceph-CSI driver to mount the storage to pods Rook-Ceph Operator architecture The Rook-Ceph operator image includes all required tools to manage the cluster. There is no change to the data path. However, the operator does not expose all Ceph configurations. Many of the Ceph features like placement groups and crush maps are hidden from the users and are provided with a better user experience in terms of physical resources, pools, volumes, filesystems, and buckets. 3.3.4. Resources Rook-Ceph operator adds owner references to all the resources it creates in the openshift-storage namespace. When the cluster is uninstalled, the owner references ensure that the resources are all cleaned up. This includes OpenShift Container Platform resources such as configmaps , secrets , services , deployments , daemonsets , and so on. 
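For a quick overview of the daemons and resources that Rook-Ceph manages, a sketch like the following can help; the openshift-storage namespace is assumed and the grep pattern is only illustrative.
oc get pods -n openshift-storage | grep rook-ceph   # operator, mon, mgr, OSD, MDS, and RGW pods
oc get cephcluster -n openshift-storage             # overall Ceph health as reported by Rook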
The Rook-Ceph operator watches CRs to configure the settings determined by OpenShift Data Foundation, which include CephCluster , CephObjectStore , CephFilesystem , and CephBlockPool . 3.3.5. Lifecycle Rook-Ceph operator manages the lifecycle of the following pods in the Ceph cluster: Rook operator A single pod that owns the reconciliation of the cluster. RBD CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . CephFS CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . Monitors (mons) Three mon pods, each with its own deployment. Stretch clusters Contain five mon pods, one in the arbiter zone and two in each of the other two data zones. Manager (mgr) There is a single mgr pod for the cluster. Stretch clusters There are two mgr pods (starting with OpenShift Data Foundation 4.8), one in each of the two non-arbiter zones. Object storage daemons (OSDs) At least three OSDs are created initially in the cluster. More OSDs are added when the cluster is expanded. Metadata server (MDS) The CephFS metadata server has a single pod. RADOS gateway (RGW) The Ceph RGW daemon has a single pod. 3.4. MCG operator The Multicloud Object Gateway (MCG) operator is an operator for OpenShift Data Foundation along with the OpenShift Data Foundation operator and the Rook-Ceph operator. The MCG operator is available upstream as a standalone operator. The MCG operator performs the following primary functions: Controls and reconciles the Multicloud Object Gateway (MCG) component within OpenShift Data Foundation. Manages new user resources such as object bucket claims, bucket classes, and backing stores. Creates the default out-of-the-box resources. Some configuration and information is passed to the MCG operator through the OpenShift Data Foundation operator. 3.4.1. Components The MCG operator does not have sub-components. However, it consists of a reconcile loop for the different resources that are controlled by it. The MCG operator has a command-line interface (CLI) and is available as a part of OpenShift Data Foundation. It enables the creation, deletion, and querying of various resources. Unlike applying a YAML file directly, this CLI adds a layer of input sanitization and status validation before the configurations are applied. 3.4.2. Responsibilities and resources The MCG operator reconciles and is responsible for the following custom resource definitions (CRDs) and OpenShift Container Platform entities: Backing store Namespace store Bucket class Object bucket claims (OBCs) NooBaa, pod stateful sets CRD Prometheus Rules and Service Monitoring Horizontal pod autoscaler (HPA) Backing store A resource that the customer has connected to the MCG component. This resource provides MCG with the ability to save the data of the provisioned buckets on top of it. A default backing store is created as part of the deployment depending on the platform that the OpenShift Container Platform is running on. For example, when OpenShift Container Platform or OpenShift Data Foundation is deployed on Amazon Web Services (AWS), it results in a default backing store, which is an AWS::S3 bucket. Similarly, for Microsoft Azure, the default backing store is a blob container and so on. The default backing stores are created using CRDs for the cloud credential operator, which comes with OpenShift Container Platform. There is no limit on the number of backing stores that can be added to MCG.
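Backing stores and the other MCG resources can be listed with either oc or the MCG CLI. The commands below are a sketch; the resource short names are the usual ones and the openshift-storage namespace is assumed.
oc get backingstore,namespacestore,bucketclass -n openshift-storage   # MCG resources managed by the operator
oc get obc -A                                                         # object bucket claims across namespaces
noobaa backingstore list -n openshift-storage                         # MCG CLI equivalent; requires the noobaa CLI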
The backing stores are used in the bucket class CRD to define the different policies of the bucket. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of services or resources supported as backing stores. Namespace store Resources that are used in namespace buckets. No default is created during deployment. Bucketclass A default or initial policy for a newly provisioned bucket. The following policies are set in a bucketclass: Placement policy Indicates the backing stores to be attached to the bucket and used to write the data of the bucket. This policy is used for data buckets and for cache policies to indicate the local cache placement. There are two modes of placement policy: Spread. Stripes the data across the defined backing stores. Mirror. Creates a full replica on each backing store. Namespace policy A policy for the namespace buckets that defines the resources that are being used for aggregation and the resource used for the write target. Cache Policy This is a policy for the bucket and sets the hub (the source of truth) and the time to live (TTL) for the cache items. A default bucket class is created during deployment, and it is set with a placement policy that uses the default backing store. There is no limit to the number of bucket classes that can be added. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of policies that are supported. Object bucket claims (OBCs) CRDs that enable provisioning of S3 buckets. With MCG, OBCs receive an optional bucket class to note the initial configuration of the bucket. If a bucket class is not provided, the default bucket class is used. NooBaa, pod stateful sets CRD An internal CRD that controls the different pods of the NooBaa deployment such as the DB pod, the core pod, and the endpoints. This CRD must not be changed as it is internal. This operator reconciles the following entities: DB pod SCC Role Binding and Service Account to allow single sign-on (SSO) between OpenShift Container Platform and NooBaa user interfaces Route for S3 access Certificates that are taken and signed by the OpenShift Container Platform and are set on the S3 route Prometheus rules and service monitoring These CRDs set up scraping points for Prometheus and alert rules that are supported by MCG. Horizontal pod autoscaler (HPA) It is integrated with the MCG endpoints. The endpoint pods scale up and down according to CPU pressure (amount of S3 traffic). 3.4.3. High availability As an operator, the only high availability provided is that the OpenShift Container Platform reschedules a failed pod. 3.4.4. Relevant log files To troubleshoot issues with the NooBaa operator, you can look at the following: Operator pod logs, which are also available through the must-gather. Different CRDs or entities and their statuses that are available through the must-gather. 3.4.5. Lifecycle The MCG operator runs and reconciles after OpenShift Data Foundation is deployed and until it is uninstalled.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_operators
|
Configuration Guide
|
Configuration Guide Red Hat JBoss Enterprise Application Platform 8.0 Instructions for setting up and maintaining Red Hat JBoss Enterprise Application Platform, including running applications and services. Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/index
|
Chapter 4. Creating VolumeReplicationClass resource
|
Chapter 4. Creating VolumeReplicationClass resource The VolumeReplicationClass is used to specify the mirroringMode for each volume to be replicated, as well as how often a volume or image is replicated (for example, every 5 minutes) from the local cluster to the remote cluster. Note This resource must be created on the Primary managed cluster and the Secondary managed cluster . Procedure Save the following YAML to a file named rbd-volumereplicationclass.yaml . Create the file on both managed clusters. Example output:
|
[
"apiVersion: replication.storage.openshift.io/v1alpha1 kind: VolumeReplicationClass metadata: name: odf-rbd-volumereplicationclass spec: provisioner: openshift-storage.rbd.csi.ceph.com parameters: mirroringMode: snapshot schedulingInterval: \"5m\" # <-- Must be the same as scheduling interval in the DRPolicy replication.storage.openshift.io/replication-secret-name: rook-csi-rbd-provisioner replication.storage.openshift.io/replication-secret-namespace: openshift-storage",
"oc create -f rbd-volumereplicationclass.yaml",
"volumereplicationclass.replication.storage.openshift.io/odf-rbd-volumereplicationclass created"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_regional-dr_with_advanced_cluster_management/creating-volumereplicationclass-resource_rhodf
|
Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications
|
Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/red_hat_ha_solutions_for_sap_hana_s4hana_and_netweaver_based_sap_applications/index
|
Chapter 10. Using report templates to monitor hosts
|
Chapter 10. Using report templates to monitor hosts You can use report templates to query Satellite data to obtain information about, for example, host status, registered hosts, applicable errata, applied errata, subscription details, and user activity. You can use the report templates that ship with Satellite or write your own custom report templates to suit your requirements. The reporting engine uses the embedded Ruby (ERB) syntax. For more information about writing templates and ERB syntax, see Appendix B, Template writing reference . You can create a template, or clone a template and edit the clone. For help with the template syntax, click a template and click the Help tab. 10.1. Generating host monitoring reports To view the report templates in the Satellite web UI, navigate to Monitor > Reports > Report Templates . To schedule reports, configure a cron job or use the Satellite web UI. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . For example, the following templates are available: Host - Installed Products Use this template for hosts in Simple Content Access (SCA) organizations. It generates a report with installed product information along with other metrics included in Subscription - Entitlement Report except information about subscriptions. Subscription - Entitlement Report Use this template for hosts that are not in SCA organizations. It generates a report with information about subscription entitlements including when they expire. It only outputs information for hosts in organizations that do not use SCA. To the right of the report template that you want to use, click Generate . Optional: To schedule a report, to the right of the Generate at field, click the icon to select the date and time you want to generate the report at. Optional: To send a report to an e-mail address, select the Send report via e-mail checkbox, and in the Deliver to e-mail addresses field, enter the required e-mail address. Optional: Apply search query filters. To view all available results, do not populate the filter field with any values. Click Submit . A CSV file that contains the report is downloaded. If you have selected the Send report via e-mail checkbox, the host monitoring report is sent to your e-mail address. CLI procedure List all available report templates: Generate a report: This command waits until the report fully generates before completing. If you want to generate the report as a background task, you can use the hammer report-template schedule command. Note If you want to generate a subscription entitlement report, you have to use the Days from Now option to specify the latest expiration time of entitlement subscriptions. You can use the no limit value to show all entitlements. Show all entitlements Show all entitlements that are going to expire within 60 days 10.2. Creating a report template In Satellite, you can create a report template and customize the template to suit your requirements. You can import existing report templates and further customize them with snippets and template macros. Report templates use Embedded Ruby (ERB) syntax. To view information about working with ERB syntax and macros, in the Satellite web UI, navigate to Monitor > Reports > Report Templates , and click Create Template , and then click the Help tab. When you create a report template in Satellite, safe mode is enabled by default. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Click Create Template . 
In the Name field, enter a unique name for your report template. If you want the template to be available to all locations and organizations, select Default . Create the template directly in the template editor or import a template from a text file by clicking Import . For more information about importing templates, see Section 10.5, "Importing report templates" . Optional: In the Audit Comment field, you can add any useful information about this template. Click the Input tab, and in the Name field, enter a name for the input that you can reference in the template in the following format: input('name') . Note that you must save the template before you can reference this input value in the template body. Select whether the input value is mandatory. If the input value is mandatory, select the Required checkbox. From the Value Type list, select the type of input value that the user must input. Optional: If you want to use facts for template input, select the Advanced checkbox. Optional: In the Options field, define the options that the user can select from. If this field remains undefined, the users receive a free-text field in which they can enter the value they want. Optional: In the Default field, enter a value, for example, a host name, that you want to set as the default template input. Optional: In the Description field, you can enter information that you want to display as inline help about the input when you generate the report. Optional: Click the Type tab, and select whether this template is a snippet to be included in other templates. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. Additional resources For more information about safe mode, see Section 10.9, "Report template safe mode" . For more information about writing templates, see Appendix B, Template writing reference . For more information about macros you can use in report templates, see Section B.6, "Template macros" . To view a step by step example of populating a template, see Section 10.8, "Creating a report template to monitor entitlements" . 10.3. Exporting report templates You can export report templates that you create in Satellite. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Locate the template that you want to export, and from the list in the Actions column, select Export . Repeat this action for every report template that you want to download. An .erb file that contains the template downloads. CLI procedure To view the report templates available for export, enter the following command: Note the template ID of the template that you want to export in the output of this command. To export a report template, enter the following command: 10.4. Exporting report templates using the Satellite API You can use the Satellite report_templates API to export report templates from Satellite. For more information about using the Satellite API, see API guide . Procedure Use the following request to retrieve a list of available report templates: Example request: In this example, the json_reformat tool is used to format the JSON output. Example response: Note the id of the template that you want to export, and use the following request to export the template: Example request: Note that 158 is an example ID of the template to export. In this example, the exported template is redirected to host_complete_list.erb . 10.5. 
Importing report templates You can import a report template into the body of a new template that you want to create. Note that using the Satellite web UI, you can only import templates individually. For bulk actions, use the Satellite API. For more information, see Section 10.6, "Importing report templates using the Satellite API" . Prerequisites You must have exported templates from Satellite to import them to use in new templates. For more information see Section 10.3, "Exporting report templates" . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . In the upper right of the Report Templates window, click Create Template . On the upper right of the Editor tab, click the folder icon, and select the .erb file that you want to import. Edit the template to suit your requirements. Click Submit . For more information about customizing your new template, see Appendix B, Template writing reference . 10.6. Importing report templates using the Satellite API You can use the Satellite API to import report templates into Satellite. Importing report templates using the Satellite API automatically parses the report template metadata and assigns organizations and locations. For more information about using the Satellite API, see the API guide . Prerequisites Create a template using .erb syntax or export a template from another Satellite. For more information about writing templates, see Appendix B, Template writing reference . For more information about exporting templates from Satellite, see Section 10.4, "Exporting report templates using the Satellite API" . Procedure Use the following example to format the template that you want to import to a .json file: Example JSON file with ERB template: Use the following request to import the template: Use the following request to retrieve a list of report templates and validate that you can view the template in Satellite: 10.7. Generating a list of installed packages Use this procedure to generate a list of installed packages in Report Templates . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . To the right of Host - All Installed Packages , click Generate . Optional: Use the Hosts filter search field to search for and apply specific host filters. Click Generate . If the download does not start automatically, click Download . Verification You have the spreadsheet listing the installed packages for the selected hosts downloaded on your machine. 10.8. Creating a report template to monitor entitlements You can use a report template to return a list of hosts with a certain subscription and to display the number of cores for those hosts. For more information about writing templates, see Appendix B, Template writing reference . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Click Create Template . Optional: In the Editor field, use the <%# > tags to add a comment with information that might be useful for later reference. For example: Add a line with the load_hosts() macro and populate the macro with the following method and variables: To view a list of variables you can use, click the Help tab and in the Safe mode methods and variables table, find the Host::Managed row. 
Add a line with the host.pools variable with the each method, for example: Add a line with the report_row() method to create a report and add the variables that you want to target as part of the report: Add end statements to the template: To generate a report, you must add the <%= report_render -%> macro: Click Submit to save the template. 10.9. Report template safe mode When you create report templates in Satellite, safe mode is enabled by default. Safe mode limits the macros and variables that you can use in the report template. Safe mode prevents rendering problems and enforces best practices in report templates. The list of supported macros and variables is available in the Satellite web UI. To view the macros and variables that are available, in the Satellite web UI, navigate to Monitor > Reports > Report Templates and click Create Template . In the Create Template window, click the Help tab and expand Safe mode methods . While safe mode is enabled, if you try to use a macro or variable that is not listed in Safe mode methods , the template editor displays an error message. To view the status of safe mode in Satellite, in the Satellite web UI, navigate to Administer > Settings and click the Provisioning tab. Locate the Safemode rendering row to check the value.
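If you prefer the CLI, the safe mode setting can also be inspected and toggled with hammer; this is a sketch, and the setting name safemode_render is an assumption to verify against your Satellite version.
hammer settings list --search 'name = safemode_render'     # show the current value
hammer settings set --name safemode_render --value true    # re-enable safe mode if it was turned off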
|
[
"hammer report-template list",
"hammer report-template generate --id My_Template_ID",
"hammer report-template generate --inputs \"Days from Now=no limit\" --name \"Subscription - Entitlement Report\"",
"hammer report-template generate --inputs \"Days from Now=60\" --name \"Subscription - Entitlement Report\"",
"hammer report-template list",
"hammer report-template dump --id My_Template_ID > example_export .erb",
"curl --insecure --user admin:redhat --request GET --config https:// satellite.example.com /api/report_templates | json_reformat",
"{ \"total\": 6, \"subtotal\": 6, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applicable errata\", \"id\": 112 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applied Errata\", \"id\": 113 }, { \"created_at\": \"2019-11-30 16:15:24 UTC\", \"updated_at\": \"2019-11-30 16:15:24 UTC\", \"name\": \"Hosts - complete list\", \"id\": 158 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Host statuses\", \"id\": 114 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Registered hosts\", \"id\": 115 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Subscriptions\", \"id\": 116 } ] }",
"curl --insecure --output /tmp/_Example_Export_Template .erb_ --user admin:password --request GET --config https:// satellite.example.com /api/report_templates/ My_Template_ID /export",
"cat Example_Template .json { \"name\": \" Example Template Name \", \"template\": \" Enter ERB Code Here \" }",
"{ \"name\": \"Hosts - complete list\", \"template\": \" <%# name: Hosts - complete list snippet: false template_inputs: - name: host required: false input_type: user advanced: false value_type: plain resource_type: Katello::ActivationKey model: ReportTemplate -%> <% load_hosts(search: input('host')).each_record do |host| -%> <% report_row( 'Server FQDN': host.name ) -%> <% end -%> <%= report_render %> \" }",
"curl --insecure --user admin:redhat --data @ Example_Template .json --header \"Content-Type:application/json\" --request POST --config https:// satellite.example.com /api/report_templates/import",
"curl --insecure --user admin:redhat --request GET --config https:// satellite.example.com /api/report_templates | json_reformat",
"<%# name: Entitlements snippet: false model: ReportTemplate require: - plugin: katello version: 3.14.0 -%>",
"<%- load_hosts(includes: [:lifecycle_environment, :operatingsystem, :architecture, :content_view, :organization, :reported_data, :subscription_facet, :pools => [:subscription]]).each_record do |host| -%>",
"<%- host.pools.each do |pool| -%>",
"<%- report_row( 'Name': host.name, 'Organization': host.organization, 'Lifecycle Environment': host.lifecycle_environment, 'Content View': host.content_view, 'Host Collections': host.host_collections, 'Virtual': host.virtual, 'Guest of Host': host.hypervisor_host, 'OS': host.operatingsystem, 'Arch': host.architecture, 'Sockets': host.sockets, 'RAM': host.ram, 'Cores': host.cores, 'SLA': host_sla(host), 'Products': host_products(host), 'Subscription Name': sub_name(pool), 'Subscription Type': pool.type, 'Subscription Quantity': pool.quantity, 'Subscription SKU': sub_sku(pool), 'Subscription Contract': pool.contract_number, 'Subscription Account': pool.account_number, 'Subscription Start': pool.start_date, 'Subscription End': pool.end_date, 'Subscription Guest': registered_through(host) ) -%>",
"<%- end -%> <%- end -%>",
"<%= report_render -%>"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/using_report_templates_to_monitor_hosts_managing-hosts
|
Chapter 2. Determining permission policy and role configuration source
|
Chapter 2. Determining permission policy and role configuration source You can configure Red Hat Developer Hub policy and roles by using different sources. To maintain data consistency, Developer Hub associates each permission policy and role with one unique source. You can only use this source to change the resource. The available sources are: Configuration file Configure roles and policies in the Developer Hub app-config.yaml configuration file, for instance to declare your policy administrators . The Configuration file pertains to the default role:default/rbac_admin role provided by the RBAC plugin. The default role has limited permissions to create, read, update, delete permission policies or roles, and to read catalog entities. Note In case the default permissions are insufficient for your administrative requirements, you can create a custom admin role with the required permission policies. REST API Configure roles and policies by using the Developer Hub Web UI or by using the REST API. CSV file Configure roles and policies by using external CSV files. Legacy The legacy source applies to policies and roles defined before RBAC backend plugin version 2.1.3 , and is the least restrictive among the source location options. Important Replace the permissions and roles using the legacy source with the permissions using the REST API or the CSV file sources. Procedure To determine the source of a role or policy, use a GET request.
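As a sketch of such a request, a curl call against the RBAC backend plugin's REST API might look like the following; the endpoint path, role name, and token are illustrative and depend on your Developer Hub version and configuration. The source field in the returned metadata indicates where the role or policy is defined.
curl -X GET 'https://<developer-hub-url>/api/permission/roles/role/default/rbac_admin' \
  -H 'Authorization: Bearer <token>'   # inspect metadata.source in the returned JSON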
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authorization/proc-determining-policy-and-role-source
|
4.6. Configuring a Watchdog
|
4.6. Configuring a Watchdog 4.6.1. Adding a Watchdog Card to a Virtual Machine You can add a watchdog card to a virtual machine to monitor the operating system's responsiveness. Adding Watchdog Cards to Virtual Machines Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the High Availability tab. Select the watchdog model to use from the Watchdog Model drop-down list. Select an action from the Watchdog Action drop-down list. This is the action that the virtual machine takes when the watchdog is triggered. Click OK . 4.6.2. Installing a Watchdog To activate a watchdog card attached to a virtual machine, you must install the watchdog package on that virtual machine and start the watchdog service. Installing Watchdogs Log in to the virtual machine on which the watchdog card is attached. Install the watchdog package and dependencies: Edit the /etc/watchdog.conf file and uncomment the following line: Save the changes. Start the watchdog service and ensure this service starts on boot: Red Hat Enterprise Linux 6: Red Hat Enterprise Linux 7: 4.6.3. Confirming Watchdog Functionality Confirm that a watchdog card has been attached to a virtual machine and that the watchdog service is active. Warning This procedure is provided for testing the functionality of watchdogs only and must not be run on production machines. Confirming Watchdog Functionality Log in to the virtual machine on which the watchdog card is attached. Confirm that the watchdog card has been identified by the virtual machine: Run one of the following commands to confirm that the watchdog is active: Trigger a kernel panic: Terminate the watchdog service: The watchdog timer can no longer be reset, so the watchdog counter reaches zero after a short period of time. When the watchdog counter reaches zero, the action specified in the Watchdog Action drop-down menu for that virtual machine is performed. 4.6.4. Parameters for Watchdogs in watchdog.conf The following is a list of options for configuring the watchdog service available in the /etc/watchdog.conf file. To configure an option, you must uncomment that option and restart the watchdog service after saving the changes. Note For a more detailed explanation of options for configuring the watchdog service and using the watchdog command, see the watchdog man page. Table 4.2. watchdog.conf variables Variable name Default Value Remarks ping N/A An IP address that the watchdog attempts to ping to verify whether that address is reachable. You can specify multiple IP addresses by adding additional ping lines. interface N/A A network interface that the watchdog will monitor to verify the presence of network traffic. You can specify multiple network interfaces by adding additional interface lines. file /var/log/messages A file on the local system that the watchdog will monitor for changes. You can specify multiple files by adding additional file lines. change 1407 The number of watchdog intervals after which the watchdog checks for changes to files. A change line must be specified on the line directly after each file line, and applies to the file line directly above that change line. max-load-1 24 The maximum average load that the virtual machine can sustain over a one-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature. max-load-5 18 The maximum average load that the virtual machine can sustain over a five-minute period. If this average is exceeded, then the watchdog is triggered. 
A value of 0 disables this feature. By default, the value of this variable is set to a value approximately three quarters that of max-load-1 . max-load-15 12 The maximum average load that the virtual machine can sustain over a fifteen-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature. By default, the value of this variable is set to a value approximately one half that of max-load-1 . min-memory 1 The minimum amount of virtual memory that must remain free on the virtual machine. This value is measured in pages. A value of 0 disables this feature. repair-binary /usr/sbin/repair The path and file name of a binary file on the local system that will be run when the watchdog is triggered. If the specified file resolves the issues preventing the watchdog from resetting the watchdog counter, then the watchdog action is not triggered. test-binary N/A The path and file name of a binary file on the local system that the watchdog will attempt to run during each interval. A test binary allows you to specify a file for running user-defined tests. test-timeout N/A The time limit, in seconds, for which user-defined tests can run. A value of 0 allows user-defined tests to continue for an unlimited duration. temperature-device N/A The path to and name of a device for checking the temperature of the machine on which the watchdog service is running. max-temperature 120 The maximum allowed temperature for the machine on which the watchdog service is running. The machine will be halted if this temperature is reached. Unit conversion is not taken into account, so you must specify a value that matches the watchdog card being used. admin root The email address to which email notifications are sent. interval 10 The interval, in seconds, between updates to the watchdog device. The watchdog device expects an update at least once every minute, and if there are no updates over a one-minute period, then the watchdog is triggered. This one-minute period is hard-coded into the drivers for the watchdog device, and cannot be configured. logtick 1 When verbose logging is enabled for the watchdog service, the watchdog service periodically writes log messages to the local system. The logtick value represents the number of watchdog intervals after which a message is written. realtime yes Specifies whether the watchdog is locked in memory. A value of yes locks the watchdog in memory so that it is not swapped out of memory, while a value of no allows the watchdog to be swapped out of memory. If the watchdog is swapped out of memory and is not swapped back in before the watchdog counter reaches zero, then the watchdog is triggered. priority 1 The schedule priority when the value of realtime is set to yes . pidfile /var/run/syslogd.pid The path and file name of a PID file that the watchdog monitors to see if the corresponding process is still active. If the corresponding process is not active, then the watchdog is triggered.
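For reference, a minimal /etc/watchdog.conf sketch that combines several of the options described above might look like the following; the IP address, interface name, and thresholds are illustrative values, not recommendations.
# /etc/watchdog.conf (excerpt)
watchdog-device = /dev/watchdog   # the watchdog device exposed to the virtual machine
ping = 192.0.2.1                  # trigger if this address becomes unreachable
interface = eth0                  # trigger if no traffic is seen on this interface
max-load-1 = 24                   # trigger if the 1-minute load average exceeds this value
interval = 10                     # seconds between updates to the watchdog device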
|
[
"yum install watchdog",
"watchdog-device = /dev/watchdog",
"service watchdog start chkconfig watchdog on",
"systemctl start watchdog.service systemctl enable watchdog.service",
"lspci | grep watchdog -i",
"echo c > /proc/sysrq-trigger",
"kill -9 pgrep watchdog"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-configuring_a_watchdog
|
Chapter 3. Red Hat build of OpenJDK 8.0.362 release notes
|
Chapter 3. Red Hat build of OpenJDK 8.0.362 release notes The latest Red Hat build of OpenJDK 8 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from previous Red Hat build of OpenJDK 8 releases. Note For all the other changes and security fixes, see OpenJDK 8u362 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that the Red Hat build of OpenJDK 8.0.362 release provides: Improved CORBA communications By default, the CORBA implementation in Red Hat build of OpenJDK 8.0.362 refuses to deserialize any objects that do not contain the IOR: prefix. If you want to revert to the previous behavior, you can set the new com.sun.CORBA.ORBAllowDeserializeObject property to true . See JDK-8285021 (JDK Bug System) . Enhanced BMP bounds By default, Red Hat build of OpenJDK 8.0.362 disables loading a linked International Color Consortium (ICC) profile in a BMP image. You can enable this functionality by setting the new sun.imageio.bmp.enabledLinkedProfiles property to true . This property replaces the old sun.imageio.plugins.bmp.disableLinkedProfiles property. See JDK-8295687 (JDK Bug System) . Improved banking of sounds Previously, the SoundbankReader implementation, com.sun.media.sound.JARSoundbankReader , downloaded a JAR soundbank from a URL. For Red Hat build of OpenJDK 8.0.362, this behavior is now disabled by default. To re-enable the behavior, set the new system property jdk.sound.jarsoundbank to true . See JDK-8293742 (JDK Bug System) . Red Hat build of OpenJDK support for Microsoft Windows 11 The Red Hat build of OpenJDK 8.0.362 can now recognize the Microsoft Windows 11 operating system, and can set the os.name property to Windows 11 . See JDK-8274840 (JDK Bug System). SHA-1 Signed JARs With the Red Hat build of OpenJDK 8.0.362 release, JARs signed with SHA-1 algorithms are restricted by default and treated as if they were unsigned. These restrictions apply to the following algorithms: Algorithms used to digest, sign, and optionally timestamp the JAR. Signature and digest algorithms of the certificates in the certificate chain of the code signer and the Timestamp Authority, and any Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) responses that are used to verify if those certificates have been revoked. Additionally, the restrictions apply to signed Java Cryptography Extension (JCE) providers. To reduce the compatibility risk for JARs that have been previously timestamped, the restriction does not apply to any JAR signed with SHA-1 algorithms and timestamped prior to January 01, 2019 . This exception might be removed in a future Red Hat build of OpenJDK release. To determine if your JAR file is impacted by the restriction, you can issue the following command in your CLI: From the output of the command, search for instances of SHA1 , SHA-1 , or disabled . Additionally, search for any warning messages that indicate that the JAR will be treated as unsigned. For example: Consider replacing or re-signing any JARs affected by the new restrictions with stronger algorithms. If your JAR file is impacted by this restriction, you can remove the algorithm and re-sign the file with a stronger algorithm, such as SHA-256 .
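As a sketch of re-signing with a stronger algorithm, the keystore, alias, and timestamp authority URL below are placeholders for your own values.
jarsigner -keystore my-keystore.jks -digestalg SHA-256 -sigalg SHA256withRSA \
  -tsa http://timestamp.example.com application.jar signer-alias   # re-sign the JAR with SHA-256 digests
jarsigner -verify -verbose -certs application.jar                  # confirm the new signature is no longer flagged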
If you want to remove the restriction on SHA-1 signed JARs for Red Hat build of OpenJDK 8.0.362, and you accept the security risks, you can complete the following actions: Modify the java.security configuration file. Alternatively, you can preserve this file and instead create another file with the required configurations. Remove the SHA1 usage SignedJAR & denyAfter 2019-01-01 entry from the jdk.certpath.disabledAlgorithms security property. Remove the SHA1 denyAfter 2019-01-01 entry from the jdk.jar.disabledAlgorithms security property. Note The value of jdk.certpath.disabledAlgorithms in the java.security file might be overridden by the system security policy on RHEL 8 and 9. The values used by the system security policy can be seen in the file /etc/crypto-policies/back-ends/java.config and disabled by either setting security.useSystemPropertiesFile to false in the java.security file or passing -Djava.security.disableSystemPropertiesFile=true to the JVM. These values are not modified by this release, so the values remain the same for releases of Red Hat build of OpenJDK. For an example of configuring the java.security file, see Overriding java.security properties for JBoss EAP for OpenShift (Red Hat Customer Portal). See JDK-8269039 (JDK Bug System).
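If you prefer not to edit java.security directly, the JDK can append properties from a separate file passed with -Djava.security.properties; the sketch below assumes that mechanism is enabled (security.overridePropertiesFile=true, the default), and the property value shown simply omits the SHA1 entries quoted earlier.
cat > /tmp/custom.security <<'EOF'
jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024
EOF
java -Djava.security.properties=/tmp/custom.security -jar application.jar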
|
[
"jarsigner -verify -verbose -certs",
"Signed by \"CN=\"Signer\"\" Digest algorithm: SHA-1 (disabled) Signature algorithm: SHA1withRSA (disabled), 2048-bit key WARNING: The jar will be treated as unsigned, because it is signed with a weak algorithm that is now disabled by the security property: jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024, SHA1 denyAfter 2019-01-01"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.362/openjdk-80362-release-notes_openjdk
|
Chapter 6. Configuring virtual machine subscriptions
|
Chapter 6. Configuring virtual machine subscriptions You can use host-based subscriptions for Red Hat Enterprise Linux virtual machines in the following virtualization platforms: Red Hat Virtualization Red Hat Enterprise Linux Virtualization (KVM) Red Hat OpenStack Platform VMware vSphere Microsoft Hyper-V OpenShift Virtualization 6.1. Host-based subscriptions Virtual machines can use host-based subscriptions instead of consuming entitlements from physical subscriptions. A host-based subscription is attached to a hypervisor and entitles it to provide subscriptions to its virtual machines. Many host-based subscriptions provide entitlements for unlimited virtual machines. To allow virtual machines to inherit subscriptions from their hypervisors, you must install and configure the virt-who daemon. Virt-who queries the virtualization platform and reports hypervisor and virtual machine information to Red Hat Subscription Management. When a virtual machine is registered with auto-attach enabled, and sufficient host-based subscriptions are available, one of the following behaviors occurs: If the virtual machine has been reported by virt-who and a host-based subscription is attached to the hypervisor, the virtual machine inherits a subscription from the hypervisor. If the virtual machine has been reported by virt-who, and the hypervisor is registered to Subscription Management but does not have a host-based subscription attached, a host-based subscription is attached to the hypervisor and inherited by the virtual machine. If the virtual machine, or its hypervisor, has not been reported by virt-who, Subscription Management grants the virtual machine a temporary subscription, valid for up to seven days. After virt-who reports updated information, Subscription Management can determine which hypervisor the virtual machine is running on and attach a permanent subscription to the virtual machine. If auto-attach is enabled, but virt-who is not running or there are no host-based subscriptions available, Subscription Management attaches physical subscriptions to the virtual machines instead, which might consume more entitlements than intended. If auto-attach is not enabled, virtual machines cannot use host-based subscriptions. Note System Purpose add-ons have no effect on the auto-attach feature in Red Hat Enterprise Linux 8.0, 8.1, and 8.2. If you are managing subscriptions in entitlement-based mode, you can use the Customer Portal to check if a subscription requires the virt-who daemon to be enabled. To check if a subscription requires virt-who, log in to the Customer Portal at https://access.redhat.com , navigate to Subscriptions , and select a subscription to view the details. If "Virt-Who: Required" appears in the SKU Details , you must configure virt-who to use that subscription. If you are managing subscriptions with Red Hat Satellite, you can use the Satellite web UI to check if a subscription requires the virt-who daemon to be enabled. To check if a subscription requires virt-who, open the Satellite web UI and navigate to Content > Subscriptions . If the Requires Virt-Who column shows a checkmark for a subscription, you must configure virt-who to use that subscription. Virtual machine subscription process This diagram shows the subscription workflow when a virtual machine has not yet been reported by virt-who: A virtual machine requests a subscription from Subscription Management.
Subscription Management grants the virtual machine a temporary subscription, valid for a maximum of seven days, while it determines which hypervisor the virtual machine belongs to. Virt-who connects to the hypervisor or virtualization manager and requests information about its virtual machines. The hypervisor or virtualization manager returns a list of its virtual machines to virt-who, including each UUID. Virt-who reports the list of virtual machines and their hypervisors to Subscription Management. Subscription Management attaches a permanent subscription to the virtual machine, if sufficient entitlements are available. Additional resources For more information about the Red Hat subscription model, see Introduction to Red Hat Subscription Management Workflows . To allow virtual machines to inherit subscriptions from their hypervisors, complete the following steps: Prerequisites Ensure you have active subscriptions for all of the hypervisors that you plan to use. For Microsoft Hyper-V, create a read-only virt-who user with a non-expiring password on each hypervisor that runs Red Hat Enterprise Linux virtual machines. For VMware vSphere, create a read-only virt-who user with a non-expiring password on the vCenter Server. The virt-who user requires at least read-only access to all objects in the vCenter Data Center. For OpenShift Virtualization, create a Service Account and grant it an admin role on the OpenShift cluster master; virt-who needs the Service Account token to connect to the OpenShift cluster. 6.2. Virt-who configuration for each virtualization platform Virt-who is configured using files that specify details such as the virtualization type and the hypervisor or virtualization manager to query. The supported configuration is different for each virtualization platform. Individual configuration files are stored in the /etc/virt-who.d/ directory. You must create an individual configuration file for each hypervisor or virtualization manager. Example virt-who configuration file This example shows an individual virt-who configuration file for a Microsoft Hyper-V hypervisor: The type and server values depend on the virtualization platform. The following table provides more detail. The username refers to a read-only user on Microsoft Hyper-V or VMware vCenter, which you must create before configuring virt-who. Virt-who uses this account to retrieve the list of virtual machines. You do not need a dedicated virt-who user for Red Hat hypervisors. Required configuration for each virtualization platform Use this table to plan your virt-who configuration: Supported virtualization platform Type specified in the configuration file Server specified in the configuration file Server where virt-who is installed Red Hat Virtualization Red Hat Enterprise Linux Virtualization (KVM) Red Hat OpenStack Platform libvirt Not required Each hypervisor VMware vSphere esx vCenter Server A dedicated RHEL server Microsoft Hyper-V hyperv Hypervisor A dedicated RHEL server OpenShift Virtualization kubevirt OpenShift Cluster Master A dedicated Red Hat Enterprise Linux server Important The rhevm and xen hypervisor types are not supported. 6.2.1. Virt-who general configuration Note '/etc/sysconfig/virt-who' will not be supported in the next major release; the global configuration file will be replaced by '/etc/virt-who.conf'. (i.e. 'VIRTWHO_DEBUG', 'VIRTWHO_ONE_SHOT', 'VIRTWHO_INTERVAL', 'HTTPS_PROXY', 'NO_PROXY').
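The per-hypervisor settings referenced in section 6.2 live in files under /etc/virt-who.d/. As a hedged sketch, a Microsoft Hyper-V entry might look like the following, where the section name, host name, credentials, and organization ID are placeholders; the global settings discussed next live in /etc/virt-who.conf.
# /etc/virt-who.d/hyperv1.conf (values are illustrative)
[hyperv1]
type=hyperv
server=hyperv1.example.com
username=virt_who_user
encrypted_password=<output of virt-who-password>
owner=<organization from subscription-manager orgs>
hypervisor_id=hostname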
The general configuration file (located at '/etc/virt-who.conf') is created automatically when you install virt-who. You can use the default values or edit this file if required. It has three special sections: '[global]', '[defaults]', and '[system_environment]'. The settings in the global section affect the overall operation of the application. Example: Global section 1 How often to check connected hypervisors for changes (seconds). Also affects how often a mapping is reported. Because the virtual machines are granted temporary subscriptions for up to seven days, frequent queries are not required; you can select an interval that suits the size of your environment. 2 Enable debugging output The settings in the defaults section are applied as defaults to the configurations found in '/etc/virt-who.d/*.conf'. If you enable the options in this section, you don't need to set them in '/etc/virt-who.d/*.conf' again. Example: Defaults section 1 The organization the hypervisor belongs to. You can find the organization by running subscription-manager orgs on the hypervisor. 2 How the hypervisor will be identified; one of: uuid, hostname, hwuuid The settings in the system_environment section are written to the system's environment and are available for the duration of the process execution; they are used whether virt-who was started as a service or from the command line. Example: system_environment section 1 Use an HTTP proxy for virt-who communication 2 If you do not want to use an HTTP proxy for any virt-who communication from this server, you can set no_proxy to *. Note The section [system_environment] is supported from virt-who-0.30.x-1.el8 (RHEL 8.4). If you are using an older virt-who version, please set 'HTTP_PROXY' and 'NO_PROXY' in '/etc/sysconfig/virt-who'. 6.3. Attaching a host-based subscription to hypervisors Use this procedure to attach a host-based subscription, such as Red Hat Enterprise Linux for Virtual Datacenters , to hypervisors that are already registered. To register a new hypervisor, see Using and Configuring Red Hat Subscription Manager . You must register a hypervisor before configuring virt-who to query it. Prerequisites You have active subscriptions for all of the hypervisors that you plan to use. Web UI procedure Log in to the Customer Portal at https://access.redhat.com . Navigate to Subscriptions > Systems and click the name of the hypervisor. Click the Subscriptions tab. Click Attach Subscriptions . Select the host-based subscription, then click Attach Subscriptions . Repeat these steps for each hypervisor. CLI procedure On the hypervisor, identify and make a note of your host-based subscription's Pool ID: Attach the host-based subscription to the hypervisor: Verify that the host-based subscription is attached: Repeat these steps for each hypervisor. 6.4. Preparing a virt-who host Use this procedure to configure a Red Hat Enterprise Linux 7 server to run the virt-who service for VMware vCenter and Microsoft Hyper-V. The server can be physical or virtual. You do not need a separate virt-who host for Red Hat hypervisors. Procedure Install a Red Hat Enterprise Linux 7 server. Only a CLI environment is required. For more information, see the Red Hat Enterprise Linux 7 Installation Guide .
Register the server:
Open a network port for communication between virt-who and the subscription service:
Open a network port for communication between virt-who and each hypervisor or virtualization manager:
VMware vCenter: TCP port 443
Microsoft Hyper-V: TCP port 5985
Install virt-who:
Optional: Edit the /etc/virt-who.conf file to change or add global settings. These settings apply to all virt-who connections from this server.
Change the value of VIRTWHO_INTERVAL to specify how often, in minutes, virt-who queries the virtualization platform. Because the virtual machines are granted temporary subscriptions for up to seven days, frequent queries are not required; you can select an interval that suits the size of your environment. Once a day ( 1440 ) is suitable for most environments.
If you want to use an HTTP proxy for virt-who communication, add a line specifying the proxy:
If you do not want to use an HTTP proxy for any virt-who communication from this server, add the following line:
Start and enable the virt-who service:

6.5. Configuring virt-who
Important
The use of environment variables and the use of the sysconfig file to configure virt-who are deprecated. Their use will be ignored in the next major release.
The supported virt-who configuration is different for each virtualization platform:
To configure virt-who for Red Hat products, see Installing and configuring virt-who on Red Hat hypervisors .
To configure virt-who for VMware vCenter, see Configuring virt-who to connect to VMware vCenter .
To configure virt-who for Microsoft Hyper-V, see Configuring virt-who to connect to Microsoft Hyper-V .
To configure virt-who for OpenShift Virtualization, see Configuring virt-who to connect to OpenShift Virtualization .

6.5.1. Installing and configuring virt-who on Red Hat hypervisors
Use this procedure to install and configure virt-who on each hypervisor in Red Hat Enterprise Linux Virtualization (KVM), Red Hat Virtualization, or Red Hat OpenStack Platform.

Prerequisites
Register the hypervisor to Red Hat Subscription Management.
If you are using Red Hat Virtualization Host (RHVH), update it to the latest version so that the minimum virt-who version is available. Virt-who is available by default on RHVH, but cannot be updated individually from the rhel-7-server-rhvh-4-rpms repository.

Procedure
Install virt-who on the hypervisor:
Optional: Edit the /etc/virt-who.conf file to change or add global settings. Because virt-who is installed locally, these settings apply only to this hypervisor.
Change the value of VIRTWHO_INTERVAL to specify how often, in minutes, virt-who queries the hypervisor. Because the virtual machines are granted temporary subscriptions for up to seven days, frequent queries are not required; you can select an interval that suits the size of your environment. Once a day ( 1440 ) is suitable for most environments.
If you want to use an HTTP proxy for virt-who communication, add a line specifying the proxy:
If you do not want to use an HTTP proxy for any virt-who communication from this server, add the following line:
Note
NO_PROXY=* can be used but only in /etc/sysconfig/virt-who . NO_PROXY is not a valid configuration in /etc/virt-who.conf .
Copy the template configuration file to a new individual configuration file:
Edit the configuration file you just created, changing the example values to those specific to your configuration:
1 The name does not need to be unique, because this configuration file is the only one managed by this instance of virt-who.
2 Specifies that this virt-who connection is to a Red Hat hypervisor.
3 The organization the hypervisor belongs to. You can find the organization by running subscription-manager orgs on the hypervisor.
4 Specifies how to identify the hypervisor. Use hostname to provide meaningful host names to Subscription Management. Alternatively, you can use uuid to avoid duplication if a hypervisor is renamed. Do not use hwuuid for an individual hypervisor.
Start and enable the virt-who service:
Repeat these steps for each hypervisor.

6.5.2. Configuring virt-who to connect to VMware vCenter
Use this procedure to configure virt-who to connect to a VMware vCenter Server.

Prerequisites
Create a read-only virt-who user on the vCenter Server. The virt-who user requires at least read-only access to all objects in the vCenter Data Center.
Prepare a virt-who host on a Red Hat Enterprise Linux server.

Procedure
On the virt-who host, encrypt the virt-who user's password with the virt-who-password utility:
When prompted, enter the password of the virt-who user, then make a note of the encrypted form of the password.
Copy the template configuration file to a new individual configuration file:
To make it easy to identify the configuration file when troubleshooting, use the VMware vCenter host name as the new file's name. In this example, the host name is vcenter1 .
Edit the configuration file you just created, changing the example values to those specific to your configuration:
1 The name must be unique for each individual configuration file. Use the vCenter Server host name to make it easy to identify the configuration file for each hypervisor.
2 Specifies that this virt-who connection is to a VMware vCenter Server.
3 The FQDN of the vCenter Server.
4 The name of the virt-who user on the vCenter Server.
5 The encrypted password of the virt-who user.
6 The organization the hypervisors belong to. You can find the organization by running subscription-manager orgs on a hypervisor.
7 Specifies how to identify the hypervisors. Use hostname to provide meaningful host names to Subscription Management. Alternatively, you can use uuid or hwuuid to avoid duplication if a hypervisor is renamed.
8 If some hypervisors never run Red Hat Enterprise Linux virtual machines, those hypervisors do not need to be reported by virt-who. You can filter hypervisors using one of the following options. Wildcards and regular expressions are supported. If a name contains special characters, enclose it in quotation marks.
filter_hosts or exclude_hosts : Provide a comma-separated list of hypervisors according to the specified hypervisor_id . For example, if hypervisors are identified by their host name, they must be included or excluded by their host name.
filter_host_parents or exclude_host_parents : Provide a comma-separated list of clusters. Hypervisors in a filtered cluster are reported by virt-who. Hypervisors in an excluded cluster are not reported by virt-who.
Restart the virt-who service:
Repeat these steps for each vCenter Server.

6.5.3. Configuring virt-who to connect to Microsoft Hyper-V
Use this procedure to configure virt-who to connect to a Microsoft Hyper-V hypervisor.

Prerequisites
Red Hat Enterprise Linux 9 or later.
Prepare a virt-who host on a Red Hat Enterprise Linux server.
Enable basic authentication mode for the hypervisor.
Enable remote management on the hypervisor.
Create a read-only virt-who user on the hypervisor.
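Before starting the procedure, you can optionally confirm from the virt-who host that the hypervisor's WinRM endpoint is reachable. This is a convenience check rather than part of the official procedure; it assumes curl is installed and reuses the example host name hyperv1.example.com and the WinRM port 5985 mentioned earlier in this chapter.

    # Any HTTP response (even 401 or 405) means the port is open and reachable;
    # a timeout or "No route to host" points to a network or port problem.
    curl -v http://hyperv1.example.com:5985/wsman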
Procedure
On the virt-who host, encrypt the password of the hypervisor's virt-who user with the virt-who-password utility:
When prompted, enter the password of the virt-who user, then make a note of the encrypted form of the password.
Copy the template configuration file to a new individual configuration file:
To make it easy to identify the configuration file when troubleshooting, use the hypervisor's host name as the new file's name. In this example, the host name is hyperv1 .
Edit the configuration file you just created, changing the example values to those specific to your configuration:
1 The name must be unique for each individual configuration file. Use the hypervisor's host name to make it easy to identify the configuration file for each hypervisor.
2 Specifies that this virt-who connection is to a Microsoft Hyper-V hypervisor.
3 The FQDN of the Hyper-V hypervisor.
4 The name of the virt-who user on the hypervisor.
5 The encrypted password of the virt-who user.
6 The organization this hypervisor belongs to. You can find the organization by running subscription-manager orgs on the hypervisor.
7 Specifies how to identify the hypervisor. Use hostname to provide meaningful host names to Subscription Management. Alternatively, you can use uuid to avoid duplication if a hypervisor is renamed. Do not use hwuuid for an individual hypervisor.
Restart the virt-who service:
Repeat these steps for each hypervisor.

6.5.4. Configuring virt-who to connect to OpenShift Virtualization
Supported Platforms
OpenShift Virtualization is supported by the following virt-who versions:
virt-who-0.28.x-1.el7 (RHEL 7.9)
virt-who-0.29.x-1.el8 (RHEL 8.3)

Procedure
In the cluster you want to subscribe, create a project and a service account named virt-who:
Create cluster roles to list nodes and virtual machine instances.
Create cluster role bindings.
Verify that the virt-who service account has the permissions to list all running VMs:
Install virt-who on a host, which can be a VM running on OpenShift Virtualization itself:
Find your owner number on a subscribed host:
Copy the template configuration file to a new individual configuration file. To make it easy to identify the configuration file when troubleshooting, use the hostname of the cluster API. In this example, the host name is openshift-cluster-1 .
Get the token of the virt-who service account:
If /usr/bin/oc is not available, install /usr/bin/oc and use the token to log in and to create a valid kubeconfig file. You must specify the cluster API by including the URL. For example:
To use the OpenShift Virtualization certificate-authority (CA) certificate in the kubeconfig file, extract it from the cluster and save it to a file on the system running virt-who as the controller daemon:
Change the kubeconfig file to include the extracted CA certificate. For example:
Before starting the service, you can test the configuration manually:
Note
If the jq program is installed, you can use it to make the output easier to read: # virt-who --print | jq
Enable the virt-who service:
Restart the virt-who service to use the new configuration.
Virt-who logs are available in /var/log/rhsm/rhsm.log . In this file, you can view configuration or connectivity errors.

6.6. Registering virtual machines to use a host-based subscription
Register virtual machines with auto-attach so that they inherit a subscription from their hypervisor.

Prerequisites
Attach a host-based subscription to the virtual machine's hypervisor.
Configure virt-who to query the virtual machine's hypervisor.
Ensure that all hypervisors the virtual machine can migrate to have host-based subscriptions attached and report to virt-who, or limit the virtual machine's migration to specific hypervisors.

Web UI procedure
Log in to the Customer Portal at https://access.redhat.com .
Navigate to Subscriptions > Systems and click the name of the virtual machine.
Click the Subscriptions tab.
Click Run Auto-Attach .
Repeat these steps for each virtual machine.

CLI procedure
Register the virtual machine using the auto-attach option:
When prompted, enter your user name and password.
Repeat these steps for each virtual machine.
If the virtual machine has already been reported by virt-who, the virtual machine inherits a subscription from its hypervisor. If the virtual machine has not been reported by virt-who, the virtual machine receives a temporary subscription while Subscription Management waits for virt-who to provide information about which hypervisor the virtual machine is running on. After virt-who provides this information, the virtual machine inherits a subscription from its hypervisor.

6.7. Virt-who troubleshooting methods

Verifying virt-who status
Verify the status of the virt-who service:

Debug logging
Check the /var/log/rhsm/rhsm.log file, where virt-who logs all its activity by default. For more detailed logging, enable the debugging option in the /etc/virt-who.conf file:
Restart the virt-who service for the change to take effect.
When the underlying issue is resolved, modify the /etc/virt-who.conf file to disable debugging, then restart the virt-who service.

Testing configuration options
Make a change and test the result, repeating as needed. Virt-who provides three options to help test the configuration files, credentials, and connectivity to the virtualization platform:
The virt-who --one-shot command reads the configuration files, retrieves the list of virtual machines and sends it to the subscription management system, then exits immediately.
The virt-who --print command reads the configuration files and prints the list of virtual machines, but does not send it to the subscription management system.
Starting with RHEL 9 Beta, the virt-who --status command reads the configuration files and outputs a summary of the connection status for both the source and destination systems. The virt-who --status command with the --json option provides additional connectivity data, in JSON format, for each configuration.
The expected output of the virt-who --one-shot and virt-who --print commands is a list of hypervisors and their virtual machines, in JSON format. The following is an extract from a VMware vSphere instance. The output from all hypervisors follows the same structure.
The expected output for the virt-who --status command is a plain-text summary of the connection status for each configuration in virt-who.
The expected output for the virt-who --status command with the --json option provides additional information about each configuration, including its last successful run, in JSON format. This output also includes details about the success or failure status of each configuration.
When the status report indicates a configuration success, the JSON output includes the number of hypervisors and guests that virt-who reported during its last successful run cycle. When the status report indicates a configuration failure, the JSON output includes the associated error message.
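To make this troubleshooting workflow concrete, the following is a minimal sketch of a debug session on the virt-who host. It only combines steps already described in this section (enable debug output in /etc/virt-who.conf, restart the service, run a single reporting cycle, and watch the log); adjust the sequence to your environment.

    # 1. In /etc/virt-who.conf, enable debug output:
    #    [global]
    #    debug=True
    # 2. Apply the change and run one reporting cycle:
    systemctl restart virt-who
    virt-who --one-shot
    # 3. Watch the log for configuration or connectivity errors:
    tail -f /var/log/rhsm/rhsm.log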
The virt-who --status command can also be used with the --debug and --config options to provide additional information about the configuration files.

Identifying issues when using multiple virt-who configuration files
If you have multiple virt-who configuration files on one server, move one file at a time to a different directory, testing after each move. If the issue no longer occurs, the cause is associated with the most recently moved file. After you have resolved the issue, return the virt-who configuration files to their original location.
Alternatively, you can test an individual file after moving it by using the --config option to specify its location. For example:
Starting with RHEL 9 Beta, you can enter virt-who --status with the --debug and --config options to identify the configuration file causing the issue without removing any other files from the directory. For example:
You can also enter the command with the --json option to view more detailed information about each configuration in JSON format. For example:

Identifying duplicate hypervisors
Duplicate hypervisors can cause subscription and entitlement errors. Enter the following commands to check for duplicate hypervisors:
In this example, three hypervisors have the same FQDN ( localhost ), and must be corrected to use unique FQDNs.

Identifying duplicate virtual machines
Enter the following commands to check for duplicate virtual machines:

Checking the number of hypervisors
Enter the following commands to check the number of hypervisors virt-who currently reports:
Starting with RHEL 9 Beta, enter the following command to check the number of hypervisors that virt-who reported during its last successful run cycle:

Checking the number of virtual machines
Enter the following commands to check the number of virtual machines that virt-who currently reports:
Starting with RHEL 9 Beta, enter the following command to check the number of guests that virt-who reported during its last successful run cycle:

6.8. Virt-who troubleshooting scenarios

Virt-who fails to connect to the virtualization platform
If virt-who fails to connect to the hypervisor or virtualization manager, check the Red Hat Subscription Manager log file /var/log/rhsm/rhsm.log . If you find the message No route to host , the hypervisor might be listening on the wrong port. In this case, modify the virt-who configuration file for that hypervisor and append the correct port number to the server value.
You must restart the virt-who service after modifying a configuration file.

Virt-who fails to connect to the virtualization platform through an HTTP proxy on the local network
If virt-who cannot connect to the hypervisor or virtualization manager through an HTTP proxy, either configure the proxy to allow local traffic to pass through, or modify the virt-who service to use no proxy by adding the following line to '/etc/virt-who.conf':
You must restart the virt-who service after modifying a configuration file.
Note
The section [system_environment] is only supported from virt-who-0.30.x-1.el8 (RHEL 8.4). If you are using an older virt-who version, set NO_PROXY in /etc/sysconfig/virt-who.
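The two scenarios above translate into small edits to files already discussed in this chapter. The following sketch is illustrative only: the host name reuses the Hyper-V example from earlier, and the port number 5986 is a hypothetical stand-in for whatever port your hypervisor actually listens on.

    # Scenario 1: append the correct port to the server value in the
    # individual configuration file, for example /etc/virt-who.d/hyperv1.conf
    [hyperv1]
    type=hyperv
    server=hyperv1.example.com:5986
    # (remaining options unchanged)

    # Scenario 2: bypass the HTTP proxy for all virt-who traffic
    # (virt-who-0.30.x-1.el8 or later), in /etc/virt-who.conf
    [system_environment]
    no_proxy=*

    # Then restart the service so the changes take effect
    systemctl restart virt-who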
|
[
"[hypervisor1] type=hyperv server=hypervisor1.example.com username=virt_who_user encrypted_password=bd257f93d@482B76e6390cc54aec1a4d hypervisor_id=hostname owner=1234567",
"[global] interval=3600 1 debug=True 2",
"[defaults] owner=1234567 1 hypervisor_id=hostname 2",
"[system_environment] http_proxy= https://proxy.example.com:443 1 no_proxy=* 2",
"subscription-manager list --all --available --matches ' Host-based Subscription Name '",
"subscription-manager attach --pool= Pool_ID",
"subscription-manager list --consumed",
"subscription-manager register --auto-attach",
"firewall-cmd --add-port=\"443/tcp\" firewall-cmd --add-port=\"443/tcp\" --permanent",
"yum install virt-who",
"http_proxy= https://proxy.example.com:443",
"NO_PROXY=*",
"systemctl enable --now virt-who",
"yum install virt-who",
"http_proxy= https://proxy.example.com:443",
"NO_PROXY=*",
"cp /etc/virt-who.d/template.conf /etc/virt-who.d/ local.conf",
"[local] 1 type=libvirt 2 owner=1234567 3 hypervisor_id=hostname 4",
"systemctl enable --now virt-who",
"virt-who-password",
"cp /etc/virt-who.d/template.conf /etc/virt-who.d/ vcenter1 .conf",
"[vcenter1] 1 type=esx 2 server=vcenter1.example.com 3 username=virt_who_user 4 encrypted_password=bd257f93d@482B76e6390cc54aec1a4d 5 owner=1234567 6 hypervisor_id=hostname 7 filter_hosts=esx1.example.com, esx2.example.com 8",
"systemctl restart virt-who",
"virt-who-password",
"cp /etc/virt-who.d/template.conf /etc/virt-who.d/ hyperv1 .conf",
"[hyperv1] 1 type=hyperv 2 server=hyperv1.example.com 3 username=virt_who_user 4 encrypted_password=bd257f93d@482B76e6390cc54aec1a4d 5 owner=1234567 6 hypervisor_id=hostname 7",
"systemctl restart virt-who",
"oc new-project virt-who oc create serviceaccount virt-who",
"oc create clusterrole lsnodes --verb=list --resource=nodes oc create clusterrole lsvmis --verb=list --resource=vmis",
"oc adm policy add-cluster-role-to-user lsnodes system:serviceaccount:virt-who:virt-who oc adm policy add-cluster-role-to-user lsvmis system:serviceaccount:virt-who:virt-who",
"oc get vmis -A --as=system:serviceaccount:virt-who:virt-who",
"[virtwho-host]USD yum install virt-who",
"subscription-manager orgs",
"cp /etc/virt-who.d/template.conf /etc/virt-who.d/openshift-cluster-1.conf [cnv] type=kubevirt kubeconfig=/root/.kube/config hypervisor_id=hostname owner=<owner_number>",
"oc serviceaccounts get-token virt-who",
"oc login https://api.testcluster-1.example.org:6443 --token=<token>",
"get secret -n openshift-kube-apiserver-operator loadbalancer-serving-signer -o jsonpath='{.data.tls\\.crt}' | base64 -d > USDcluster-ca.pem",
"[virtwho-host]USD cat /root/.kube/config apiVersion: v1 clusters: - cluster: server: https://api.testcluster.example.org:6443 certificate-authority: /root/testcluster-ca.pem name: api-testcluster-example-org:6443 contexts: - context: cluster: api-test-cluster-example-org:6443 namespace: default",
"virt-who --print",
"systemctl enable virt-who",
"systemctl restart virt-who",
"subscription-manager register --auto-attach",
"systemctl status virt-who.service",
"[global] debug=True",
"{ \"guestId\": \"422f24ed-71f1-8ddf-de53-86da7900df12\", \"state\": 5, \"attributes\": { \"active\": 0, \"virtWhoType\": \"esx\", \"hypervisorType\": \"vmware\" } },",
"+-------------------------------------------+ Configuration Status +-------------------------------------------+ Configuration Name: esx_config1 Source Status: success Destination Status: success Configuration Name: hyperv-55 Source Status: failure Destination Status: failure",
"\"configurations\": [ { \"name\":\"esx-conf1\", \"source\":{ \"connection\":\"https://esx_system.example.com\", \"status\":\"success\", \"last_successful_retrieve\":\"2020-02-28 07:25:25 UTC\", \"hypervisors\":20, \"guests\":37 }, \"destination\":{ \"connection\":\"candlepin.example.com\", \"status\":\"success\", \"last_successful_send\":\"2020-02-28 07:25:27 UTC\", \"last_successful_send_job_status\":\"FINISHED\" } }, { \"name\":\"hyperv-55\", \"source\":{ \"connection\":\"windows10-3.company.com\", \"status\":\"failure\", \"message\":\"Unable to connect to server: invalid credentials\", \"last_successful_retrieve\":null }, \"destination\":{ \"connection\":\"candlepin.company.com\", \"status\":\"failure\", \"message\":\"ConnectionRefusedError: [Errno 111] Connection refused\", \"last_successful_send\":null, \"last_successful_send_job_status\":null } } ] }",
"virt-who --debug --one-shot --config /tmp/ conf_name .conf",
"#virt-who --debug --status --config /tmp/conf_name.conf",
"#virt-who --debug --status --json --config /tmp/conf_name.conf",
"systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep name | sort | uniq -c | sort -nr | head -n10 3 \"name\": \"localhost\" 1 \"name\": \"rhel1.example.com\" 1 \"name\": \"rhel2.example.com\" 1 \"name\": \"rhel3.example.com\" 1 \"name\": \"rhel4.example.com\" 1 \"name\": \"rhvh1.example.com\" 1 \"name\": \"rhvh2.example.com\" 1 \"name\": \"rhvh3.example.com\" 1 \"name\": \"rhvh4.example.com\" 1 \"name\": \"rhvh5.example.com\"",
"systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep \"guestId\" | sort | uniq -c | sort -nr | head -n10",
"systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep name | sort | uniq -c | wc -l",
"virt-who --status --json",
"systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep \"guestId\" | sort | uniq -c | wc -l",
"virt-who --status --json",
"[system_environment] no_proxy=*"
] |
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_rhel_system_registration/adv-reg-rhel-config-vm-sub_
|
Installing on vSphere
|
Installing on vSphere OpenShift Container Platform 4.15 Installing OpenShift Container Platform on vSphere Red Hat OpenShift Documentation Team
|
[
"platform: vsphere: hosts: - role: bootstrap 1 networkDevice: ipAddrs: - 192.168.204.10/24 2 gateway: 192.168.204.1 3 nameservers: 4 - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.11/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.12/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.13/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: compute networkDevice: ipAddrs: - 192.168.204.14/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 8 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 9 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> 8 datastore: \"/<datacenter>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <datacenter> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> 8 datastore: \"/<datacenter>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <datacenter> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> 8 datastore: \"/<datacenter>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <datacenter> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator",
"oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w",
"NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s",
"oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}",
"16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader",
"oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator",
"I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed",
"oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID",
"/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: vsphere:",
"platform: vsphere: apiVIPs:",
"platform: vsphere: diskType:",
"platform: vsphere: failureDomains:",
"platform: vsphere: failureDomains: name:",
"platform: vsphere: failureDomains: region:",
"platform: vsphere: failureDomains: server:",
"platform: vsphere: failureDomains: zone:",
"platform: vsphere: failureDomains: topology: computeCluster:",
"platform: vsphere: failureDomains: topology: datacenter:",
"platform: vsphere: failureDomains: topology: datastore:",
"platform: vsphere: failureDomains: topology: folder:",
"platform: vsphere: failureDomains: topology: networks:",
"platform: vsphere: failureDomains: topology: resourcePool:",
"platform: vsphere: failureDomains: topology template:",
"platform: vsphere: ingressVIPs:",
"platform: vsphere: vcenters:",
"platform: vsphere: vcenters: datacenters:",
"platform: vsphere: vcenters: password:",
"platform: vsphere: vcenters: port:",
"platform: vsphere: vcenters: server:",
"platform: vsphere: vcenters: user:",
"platform: vsphere: apiVIP:",
"platform: vsphere: cluster:",
"platform: vsphere: datacenter:",
"platform: vsphere: defaultDatastore:",
"platform: vsphere: folder:",
"platform: vsphere: ingressVIP:",
"platform: vsphere: network:",
"platform: vsphere: password:",
"platform: vsphere: resourcePool:",
"platform: vsphere: username:",
"platform: vsphere: vCenter:",
"platform: vsphere: clusterOSImage:",
"platform: vsphere: osDisk: diskSizeGB:",
"platform: vsphere: cpus:",
"platform: vsphere: coresPerSocket:",
"platform: vsphere: memoryMB:",
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_vsphere/index
|
Chapter 17. Service [v1]
|
Chapter 17. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ServiceSpec describes the attributes that a user creates on a service. status object ServiceStatus represents the current status of a service. 17.1.1. .spec Description ServiceSpec describes the attributes that a user creates on a service. Type object Property Type Description allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. clusterIP string clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies clusterIPs array (string) ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. If this field is not specified, it will be initialized from the clusterIP field. If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value. This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies externalIPs array (string) externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. externalName string externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) and requires type to be "ExternalName". externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get "Cluster" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node. Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" preserves the source IP of the traffic by routing only to endpoints on the same node as the traffic was received on (dropping the traffic if there are no local endpoints). healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). ipFamilies array (string) IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type. loadBalancerIP string Only applies to Service Type: LoadBalancer. 
This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version. loadBalancerSourceRanges array (string) If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ ports array The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ports[] object ServicePort contains information on service's port. publishNotReadyAddresses boolean publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior. selector object (string) Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/ sessionAffinity string Used to maintain session affinity. Supports "ClientIP" (client IP based session affinity) and "None". Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Possible enum values: - "ClientIP" - client IP based session affinity. - "None" - no session affinity. sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity. type string type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName.
Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Possible enum values: - "ClusterIP" means a service will only be accessible inside the cluster, via the cluster IP. - "ExternalName" means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved. - "LoadBalancer" means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type. - "NodePort" means a service will be exposed on one port of every node, in addition to 'ClusterIP' type. 17.1.2. .spec.ports Description The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Type array 17.1.3. .spec.ports[] Description ServicePort contains information on service's port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service. nodePort integer The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport port integer The port that will be exposed by this service. protocol string The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. targetPort IntOrString Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service 17.1.4. .spec.sessionAffinityConfig Description SessionAffinityConfig represents the configurations of session affinity. Type object Property Type Description clientIP object ClientIPConfig represents the configurations of Client IP based session affinity. 17.1.5. .spec.sessionAffinityConfig.clientIP Description ClientIPConfig represents the configurations of Client IP based session affinity. 
Type object Property Type Description timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && <= 86400 (for 1 day) if ServiceAffinity == "ClientIP". Default value is 10800 (for 3 hours). 17.1.6. .status Description ServiceStatus represents the current status of a service. Type object Property Type Description conditions array (Condition) Current service state loadBalancer object LoadBalancerStatus represents the status of a load-balancer. 17.1.7. .status.loadBalancer Description LoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. ingress[] object LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. 17.1.8. .status.loadBalancer.ingress Description Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. Type array 17.1.9. .status.loadBalancer.ingress[] Description LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. Type object Property Type Description hostname string Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers) ip string IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers) ports array Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it ports[] object 17.1.10. .status.loadBalancer.ingress[].ports Description Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it Type array 17.1.11. .status.loadBalancer.ingress[].ports[] Description Type object Required port protocol Property Type Description error string Error is to record the problem with the service port. The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer Port is the port number of the service port of which status is recorded here. protocol string Protocol is the protocol of the service port of which status is recorded here. The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 17.2. API endpoints The following API endpoints are available: /api/v1/services GET : list or watch objects of kind Service /api/v1/watch/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services DELETE : delete collection of Service GET : list or watch objects of kind Service POST : create a Service /api/v1/watch/namespaces/{namespace}/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead.
/api/v1/namespaces/{namespace}/services/{name} DELETE : delete a Service GET : read the specified Service PATCH : partially update the specified Service PUT : replace the specified Service /api/v1/watch/namespaces/{namespace}/services/{name} GET : watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/services/{name}/status GET : read status of the specified Service PATCH : partially update status of the specified Service PUT : replace status of the specified Service 17.2.1. /api/v1/services Table 17.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Service Table 17.2. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty 17.2.2. /api/v1/watch/services Table 17.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 17.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.3. /api/v1/namespaces/{namespace}/services Table 17.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Service Table 17.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 17.8. Body parameters Parameter Type Description body DeleteOptions schema Table 17.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Service Table 17.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty HTTP method POST Description create a Service Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body Service schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 202 - Accepted Service schema 401 - Unauthorized Empty 17.2.4. /api/v1/watch/namespaces/{namespace}/services Table 17.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 17.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 17.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.5. /api/v1/namespaces/{namespace}/services/{name} Table 17.18. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 17.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Service Table 17.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.21. Body parameters Parameter Type Description body DeleteOptions schema Table 17.22. HTTP responses HTTP code Reponse body 200 - OK Service schema 202 - Accepted Service schema 401 - Unauthorized Empty HTTP method GET Description read the specified Service Table 17.23. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Service Table 17.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.25. Body parameters Parameter Type Description body Patch schema Table 17.26. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Service Table 17.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.28. Body parameters Parameter Type Description body Service schema Table 17.29. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty 17.2.6. /api/v1/watch/namespaces/{namespace}/services/{name} Table 17.30. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 17.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 17.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.7. /api/v1/namespaces/{namespace}/services/{name}/status Table 17.33. Global path parameters Parameter Type Description name string name of the Service namespace string object name and auth scope, such as for teams and projects Table 17.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Service Table 17.35. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Service Table 17.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.37. Body parameters Parameter Type Description body Patch schema Table 17.38. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Service Table 17.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.40. Body parameters Parameter Type Description body Service schema Table 17.41. HTTP responses HTTP code Response body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty
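The following sketch is an illustrative example only, tying the spec fields above to the endpoints listed in this chapter; it is not taken from this reference. The Service name hello-openshift, the example namespace, the app: hello-openshift selector, and the port numbers are hypothetical values chosen for the example. The first command creates a ClusterIP Service (a POST to /api/v1/namespaces/{namespace}/services), and the second reads it back through the raw REST path (a GET to /api/v1/namespaces/{namespace}/services/{name}).
# Create a Service; names, namespace, and ports are example values
oc apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift
  namespace: example
spec:
  type: ClusterIP
  selector:
    app: hello-openshift
  ports:
    - name: http
      protocol: TCP
      port: 80          # port exposed by the Service
      targetPort: 8080  # port on the selected pods
EOF

# Read the Service back through the raw REST endpoint
oc get --raw /api/v1/namespaces/example/services/hello-openshift
Changing spec.type to NodePort or LoadBalancer in the same manifest would cause nodePort values to be allocated and, where the platform provides a load balancer, status.loadBalancer.ingress to be populated as described in the status tables above.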
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/service-v1
|
Post-installation configuration
|
Post-installation configuration OpenShift Container Platform 4.13 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}",
"oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}'",
"dns.config.openshift.io/cluster patched",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"oc get machine -n openshift-machine-api",
"NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m",
"oc edit machines -n openshift-machine-api <control_plane_name> 1",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"bmc: address: credentialsName: disableCertificateVerification:",
"image: url: checksum: checksumType: format:",
"raid: hardwareRAIDVolumes: softwareRAIDVolumes:",
"spec: raid: hardwareRAIDVolume: []",
"rootDeviceHints: deviceName: hctl: model: vendor: serialNumber: minSizeGigabytes: wwn: wwnWithExtension: wwnVendorExtension: rotational:",
"hardware: cpu arch: model: clockMegahertz: flags: count:",
"hardware: firmware:",
"hardware: nics: - ip: name: mac: speedGbps: vlans: vlanId: pxe:",
"hardware: ramMebibytes:",
"hardware: storage: - name: rotational: sizeBytes: serialNumber:",
"hardware: systemVendor: manufacturer: productName: serialNumber:",
"provisioning: state: id: image: raid: firmware: rootDeviceHints:",
"oc get bmh -n openshift-machine-api -o yaml",
"oc get bmh -n openshift-machine-api",
"oc get bmh <host_name> -n openshift-machine-api -o yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: creationTimestamp: \"2022-06-16T10:48:33Z\" finalizers: - baremetalhost.metal3.io generation: 2 name: openshift-worker-0 namespace: openshift-machine-api resourceVersion: \"30099\" uid: 1513ae9b-e092-409d-be1b-ad08edeb1271 spec: automatedCleaningMode: metadata bmc: address: redfish://10.46.61.19:443/redfish/v1/Systems/1 credentialsName: openshift-worker-0-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:c7:f7:b0 bootMode: UEFI consumerRef: apiVersion: machine.openshift.io/v1beta1 kind: Machine name: ocp-edge-958fk-worker-0-nrfcg namespace: openshift-machine-api customDeploy: method: install_coreos online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: worker-user-data-managed namespace: openshift-machine-api status: errorCount: 0 errorMessage: \"\" goodCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\" hardware: cpu: arch: x86_64 clockMegahertz: 2300 count: 64 flags: - 3dnowprefetch - abm - acpi - adx - aes model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz firmware: bios: date: 10/26/2020 vendor: HPE version: U30 hostname: openshift-worker-0 nics: - mac: 48:df:37:c7:f7:b3 model: 0x8086 0x1572 name: ens1f3 ramMebibytes: 262144 storage: - hctl: \"0:0:0:0\" model: VK000960GWTTB name: /dev/disk/by-id/scsi-<serial_number> sizeBytes: 960197124096 type: SSD vendor: ATA systemVendor: manufacturer: HPE productName: ProLiant DL380 Gen10 (868703-B21) serialNumber: CZ200606M3 lastUpdated: \"2022-06-16T11:41:42Z\" operationalStatus: OK poweredOn: true provisioning: ID: 217baa14-cfcf-4196-b764-744e184a3413 bootMode: UEFI customDeploy: method: install_coreos image: url: \"\" raid: hardwareRAIDVolumes: null softwareRAIDVolumes: [] rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> state: provisioned triedCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\"",
"spec: settings: ProcTurboMode: Disabled 1",
"status: conditions: - lastTransitionTime: message: observedGeneration: reason: status: type:",
"status: schema: name: namespace: lastUpdated:",
"status: settings:",
"oc get hfs -n openshift-machine-api -o yaml",
"oc get hfs -n openshift-machine-api",
"oc get hfs <host_name> -n openshift-machine-api -o yaml",
"oc get hfs -n openshift-machine-api",
"oc edit hfs <host_name> -n openshift-machine-api",
"spec: settings: name: value 1",
"oc get bmh <host_name> -n openshift-machine name",
"oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api",
"oc get nodes",
"oc get machinesets -n openshift-machine-api",
"oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>",
"oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>",
"oc get hfs -n openshift-machine-api",
"oc describe hfs <host_name> -n openshift-machine-api",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ValidationFailed 2m49s metal3-hostfirmwaresettings-controller Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo",
"<BIOS_setting_name> attribute_type: allowable_values: lower_bound: upper_bound: min_length: max_length: read_only: unique:",
"oc get firmwareschema -n openshift-machine-api",
"oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml",
"oc adm release info -o json | jq .metadata.metadata",
"\"release.openshift.io/architecture\": \"multi\"",
"null",
"az login",
"az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1",
"az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".url')",
"BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.aarch64.vhd",
"end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`",
"sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`",
"az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}",
"az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy",
"{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }",
"az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}",
"az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2",
"RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)",
"az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}",
"az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0",
"/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0",
"oc create -f arm64-machine-set-0.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-arm64-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: \"<zone>\"",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-arm64-machine-set-0 2 2 2 2 10m",
"oc get nodes",
"oc create -f aws-arm64-machine-set-0.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-arm64-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.<arch>.images.aws.regions.\"<region>\".image'",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-arm64-machine-set-0 2 2 2 2 10m",
"oc get nodes",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"oc patch is/cli-artifacts -n openshift -p '{\"spec\":{\"tags\":[{\"name\":\"latest\",\"importPolicy\":{\"importMode\":\"PreserveOriginal\"}}]}}'",
"oc get istag cli-artifacts:latest -n openshift -oyaml",
"dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-404caf3180818d8ac1f50c32f14b57c3 False True True 2 1 1 1 5h51m",
"oc describe mcp worker",
"Last Transition Time: 2021-12-20T18:54:00Z Message: Node ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 is reporting: \"content mismatch for file \\\"/etc/mco-test-file\\\"\" 1 Reason: 1 nodes are reporting degraded status on sync Status: True Type: NodeDegraded 2",
"oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4",
"Annotations: cloud.network.openshift.io/egress-ipconfig: [{\"interface\":\"nic0\",\"ifaddr\":{\"ipv4\":\"10.0.128.0/17\"},\"capacity\":{\"ip\":10}}] csi.volume.kubernetes.io/nodeid: {\"pd.csi.storage.gke.io\":\"projects/openshift-gce-devel-ci/zones/us-central1-a/instances/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4\"} machine.openshift.io/machine: openshift-machine-api/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable machineconfiguration.openshift.io/currentConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/desiredConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/reason: content mismatch for file \"/etc/mco-test-file\" 1 machineconfiguration.openshift.io/state: Degraded 2",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m",
"oc describe mcp worker",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3",
"oc get machineconfigs",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m",
"oc describe machineconfigs 01-master-kubelet",
"Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\",
"oc delete -f ./myconfig.yaml",
"variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: \"chronyd.service\"",
"oc create -f disable-chronyd.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"oc create -f ./99-worker-kargs-mpath.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF",
"oc create -f 99-worker-realtime.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.26.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.26.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.26.0",
"oc debug node/ip-10-0-143-147.us-east-2.compute.internal",
"Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"oc delete -f 99-worker-realtime.yaml",
"variant: openshift version: 4.13.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit",
"cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF",
"oc create -f 80-extensions.yaml",
"oc get machineconfig 80-worker-extensions",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.26.0",
"oc debug node/ip-10-0-169-2.us-east-2.compute.internal",
"To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm",
"variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4",
"butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>",
"oc apply -f 98-worker-firmware-blob.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-kubelet-config -o yaml",
"spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc get ctrcfg",
"NAME AGE ctr-overlay 15m ctr-level 5m45s",
"oc get mc | grep container",
"01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: logLevel: debug 2 overlaySize: 8G 3 defaultRuntime: \"crun\" 4",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 logLevel: debug overlaySize: 8G",
"oc create -f <file_name>.yaml",
"oc get ContainerRuntimeConfig",
"NAME AGE overlay-size 3m19s",
"oc get machineconfigs | grep containerrun",
"99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# crio config | grep 'log_level'",
"log_level = \"debug\"",
"sh-4.4# head -n 7 /etc/containers/storage.conf",
"[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: logLevel: debug overlaySize: 8G",
"oc apply -f overlaysize.yml",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"",
"oc get machineconfigs",
"99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h",
"head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /",
"oc get ctrcfg",
"NAME AGE ctr-overlay 15m ctr-level 5m45s",
"cat /proc/1/status | grep Cap",
"capsh --decode=<decode_CapBnd_value> 1",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.26.0",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.26.0",
"kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: \"node-role.kubernetes.io/edge\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name>",
"apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application>",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.26.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v2\" 1",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s",
"oc describe mc <name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1=\"all\" 2 - psi=1 3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.26.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.26.0",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"stat -c %T -f /sys/fs/cgroup",
"cgroup2fs",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit apiserver",
"spec: encryption: type: aesgcm 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"etcd member has been defragmented: <member_name> , memberID: <member_id>",
"failed defrag on member: <member_name> , memberID: <member_id> : <error_message>",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"",
"sudo mv -v /var/lib/etcd/ /tmp",
"sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp",
"sudo crictl ps --name keepalived",
"ip -o address | egrep '<api_vip>|<ingress_vip>'",
"sudo ip address del <reported_vip> dev <reported_vip_device>",
"ip -o address | grep <api_vip>",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.26.0 host-172-25-75-38 Ready infra,worker 3d20h v1.26.0 host-172-25-75-40 Ready master 3d20h v1.26.0 host-172-25-75-65 Ready master 3d20h v1.26.0 host-172-25-75-74 Ready infra,worker 3d20h v1.26.0 host-172-25-75-79 Ready worker 3d20h v1.26.0 host-172-25-75-86 Ready worker 3d20h v1.26.0 host-172-25-75-98 Ready infra,worker 3d20h v1.26.0",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc delete node <non-recovery-controlplane-host-1> <non-recovery-controlplane-host-2>",
"oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>'",
"sudo rm -f /var/lib/ovn/etc/*.db",
"oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes",
"oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s",
"oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done",
"oc get po -n openshift-ovn-kubernetes",
"oc delete node <node>",
"ssh -i <ssh-key-path> core@<node>",
"sudo mv /var/lib/kubelet/pki/* /tmp",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending",
"adm certificate approve csr-<uuid>",
"oc get nodes",
"oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig",
"oc whoami",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1",
"oc create -f pod-disruption-budget.yaml",
"ccoctl <provider_name> refresh-keys \\ 1 --kubeconfig <openshift_kubeconfig_file> \\ 2 --credentials-requests-dir <path_to_credential_requests_directory> \\ 3 --name <name> 4",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"oc project openshift-machine-api",
"oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }",
"oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4",
"oc create -f <file-name>.yaml",
"oc get machineset",
"NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.26.0 ip-10-0-146-113.ec2.internal Ready master 127m v1.26.0 ip-10-0-153-35.ec2.internal Ready worker 118m v1.26.0 ip-10-0-176-58.ec2.internal Ready master 126m v1.26.0 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.26.0 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.26.0 ip-10-0-245-59.ec2.internal Ready worker 116m v1.26.0",
"oc debug node/<node-name> -- chroot /host lsblk",
"oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-kubelet-config -o yaml",
"spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io",
"oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=USD(oc adm release info --image-for must-gather)",
"get imagestreams -nopenshift",
"oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift",
"oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift",
"1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12",
"oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift",
"oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift",
"get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift",
"get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift",
"Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.13 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/",
"update-ca-trust",
"oc extract secret/pull-secret -n openshift-config --confirm --to=.",
".dockerconfigjson",
"{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}},\"<registry>:<port>/<namespace>/\":{\"auth\":\"<token>\"}}}",
"{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"quay.io\":{\"auth\":\"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"registry.connect.redhat.com\"{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==\",\"email\":\"[email protected]\"}, \"registry.redhat.io\":{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==\",\"email\":\"[email protected]\"}, \"registry.svc.ci.openshift.org\":{\"auth\":\"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV\"},\"my-registry:5000/my-namespace/\":{\"auth\":\"dXNlcm5hbWU6cGFzc3dvcmQ=\"}}}",
"oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>",
"oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'",
"oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>",
"oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'",
"oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>",
"oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64",
"info: Mirroring 109 images to mirror.registry.com/ocp/release mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release",
"oc image mirror <online_registry>/my/image:latest <mirror_registry>",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson",
"oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config",
"S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<config_map_name>\"}}}' --type=merge",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /var/lib/kubelet/config.json",
"{\"auths\":{\"brew.registry.redhat.io\":{\"xx==\"},\"brewregistry.stage.redhat.io\":{\"auth\":\"xxx==\"},\"mirror.registry.com:443\":{\"auth\":\"xx=\"}}} 1",
"sh-4.4# cd /etc/docker/certs.d/",
"sh-4.4# ls",
"image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1",
"sh-4.4# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-release\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\" [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\"",
"sh-4.4# exit",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.26.0 ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.26.0 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.26.0 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.26.0 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.26.0 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.26.0",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"[email protected]\"}",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson",
"oc get co insights",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d",
"oc get imagecontentsourcepolicy",
"NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h",
"oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>",
"oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0",
"imagecontentsourcepolicy.operator.openshift.io \"mirror-ocp\" deleted imagecontentsourcepolicy.operator.openshift.io \"ocp4-index-0\" deleted imagecontentsourcepolicy.operator.openshift.io \"qe45-index-0\" deleted",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] 1",
"oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'",
"{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}",
"oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1",
"oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'",
"[\"openshift-samples\"]",
"oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'",
"oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'",
"{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3",
"ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" LABEL=\"cfg_dasd_eckd_0.0.4444\" ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo chzdev -e 0.0.8000 sudo chzdev -e 1000-1002 sude chzdev -e 4444 sudo chzdev -e 0.0.8000:0x500507680d760026:0x00bc000000000000",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo /sbin/mpathconf --enable",
"sudo multipath",
"sudo fdisk /dev/mapper/mpatha",
"sudo multipath -II",
"mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running",
"Using a 4.12.0 image FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 #Install hotfix rpm RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && rpm-ostree cleanup -m && ostree container commit",
"FROM quay.io/openshift-release-dev/ocp-release@sha256 ADD configure-firewall-playbook.yml . RUN rpm-ostree install firewalld ansible && ansible-playbook configure-firewall-playbook.yml && rpm -e ansible && ostree container commit",
"Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit",
"Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit",
"Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos` hadolint ignore=DL3006 FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256 Install our config file COPY my-host-to-host.conf /etc/ipsec.d/ RHEL entitled host is needed here to access RHEL packages Install libreswan as extra RHEL package RUN rpm-ostree install libreswan && systemctl enable ipsec && ostree container commit",
"Using a 4.13.0 image FROM quay.io/openshift-release/ocp-release@sha256... 1 #Install hotfix rpm RUN rpm-ostree cliwrap install-to-root / && \\ 2 rpm-ostree override replace http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-{,core-,modules-,modules-core-,modules-extra-}5.14.0-295.el9.x86_64.rpm && \\ 3 rpm-ostree cleanup -m && ostree container commit",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: os-layer-custom spec: osImageURL: quay.io/my-registry/custom-image@sha256... 2",
"oc create -f <file_name>.yaml",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 00-worker 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-master-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 01-worker-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-master-ssh 3.2.0 98m 99-worker-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m 99-worker-ssh 3.2.0 98m os-layer-custom 10s 1 rendered-master-15961f1da260f7be141006404d17d39b 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5aff604cb1381a4fe07feaf1595a797e 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m rendered-worker-5de4837625b1cbc237de6b22bc0bc873 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 4s 2",
"oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873",
"Name: rendered-worker-5de4837625b1cbc237de6b22bc0bc873 Namespace: Labels: <none> Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 5bdb57489b720096ef912f738b46330a8f577803 machineconfiguration.openshift.io/release-image-version: {product-version}.0-ec.3 API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Os Image URL: quay.io/my-registry/custom-image@sha256",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-15961f1da260f7be141006404d17d39b True False False 3 3 3 0 39m worker rendered-worker-5de4837625b1cbc237de6b22bc0bc873 True False False 3 0 0 0 39m 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.26.0 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.26.0 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.26.0",
"oc debug node/ip-10-0-155-125.us-west-1.compute.internal",
"sh-4.4# chroot /host",
"sh-4.4# sudo rpm-ostree status",
"State: idle Deployments: * ostree-unverified-registry:quay.io/my-registry/ Digest: sha256:",
"oc delete mc os-layer-custom",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.26.0 ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.26.0 ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.26.0 ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.26.0",
"oc debug node/ip-10-0-155-125.us-west-1.compute.internal",
"sh-4.4# chroot /host",
"sh-4.4# sudo rpm-ostree status",
"State: idle Deployments: * ostree-unverified-registry:podman pull quay.io/openshift-release-dev/ocp-release@sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73 Digest: sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/post-installation_configuration/index
|
Chapter 5. Working with containers
|
Chapter 5. Working with containers Containers represent a running or stopped process created from the files located in a decompressed container image. You can use the Podman tool to work with containers. 5.1. Podman run command The podman run command runs a process in a new container based on the container image. If the container image is not already loaded, then podman run pulls the image, and all image dependencies, from the repository in the same way as running podman pull image , before it starts the container from that image. The container process has its own file system, its own networking, and its own isolated process tree. The podman run command has the form: Basic options are: --detach (-d) : Runs the container in the background and prints the new container ID. --attach (-a) : Runs the container in the foreground mode. --name (-n) : Assigns a name to the container. If a name is not assigned to the container with --name , Podman generates a random string name. This works for both background and foreground containers. --rm : Automatically remove the container when it exits. Note that the container will not be removed when it could not be created or started successfully. --tty (-t) : Allocates and attaches the pseudo-terminal to the standard input of the container. --interactive (-i) : For interactive processes, use -i and -t together to allocate a terminal for the container process. The -i -t is often written as -it . 5.2. Running commands in a container from the host Use the podman run command to display the type of operating system of the container. Prerequisites The container-tools module is installed. Procedure Display the type of operating system of the container based on the registry.access.redhat.com/ubi8/ubi container image using the cat /etc/os-release command: Optional: List all containers. Because of the --rm option you should not see any container. The container was removed. Additional resources podman-run man page on your system 5.3. Running commands inside the container Use the podman run command to run a container interactively. Prerequisites The container-tools module is installed. Procedure Run the container named myubi based on the registry.redhat.io/ubi8/ubi image: The -i option creates an interactive session. Without the -t option, the shell stays open, but you cannot type anything to the shell. The -t option opens a terminal session. Without the -i option, the shell opens and then exits. Install the procps-ng package containing a set of system utilities (for example ps , top , uptime , and so on): Use the ps -ef command to list current processes: Enter exit to exit the container and return to the host: Optional: List all containers: You can see that the container is in Exited status. Additional resources podman-run man page on your system 5.4. Listing containers Use the podman ps command to list the running containers on the system. Prerequisites The container-tools module is installed. Procedure Run the container based on the registry.redhat.io/rhel8/rsyslog image: List all containers: To list all running containers: To list all containers, running or stopped: If there are containers that are not running, but were not removed ( --rm option), the containers are present and can be restarted. Additional resources podman-ps man page on your system 5.5. Starting containers If you run a container and then stop it, but do not remove it, the container is stored on your local system ready to run again. You can use the podman start command to re-run the containers.
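For illustration, a minimal sketch of this stop-and-start cycle; the container name mycontainer and the sleep command are examples only, not part of the procedures in this chapter:
# Run a container in the background, stop it, then start it again
podman run -d --name=mycontainer registry.access.redhat.com/ubi8/ubi sleep 1000
podman stop mycontainer
podman start mycontainer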
You can specify the containers by their container ID or name. Prerequisites The container-tools module is installed. At least one container has been stopped. Procedure Start the myubi container: In non-interactive mode: Alternatively, you can use podman start 1984555a2c27 . In interactive mode, use the -a ( --attach ) and -i ( --interactive ) options to work with the container bash shell: Alternatively, you can use podman start -a -i 1984555a2c27 . Enter exit to exit the container and return to the host: Additional resources podman-start man page on your system 5.6. Inspecting containers from the host Use the podman inspect command to inspect the metadata of an existing container in a JSON format. You can specify the containers by their container ID or name. Prerequisites The container-tools module is installed. Procedure Inspect the container defined by ID 64ad95327c74: To get all metadata: To get particular items from the JSON file, for example, the StartedAt timestamp: The information is stored in a hierarchy. To see the container StartedAt timestamp ( StartedAt is under State ), use the --format option and the container ID or name. Examples of other items you might want to inspect include: .Path to see the command run with the container .Args arguments to the command .Config.ExposedPorts TCP or UDP ports exposed from the container .State.Pid to see the process id of the container .HostConfig.PortBindings port mapping from container to host Additional resources podman-inspect man page on your system 5.7. Mounting directory on localhost to the container You can make log messages from inside a container available to the host system by mounting the host /dev/log device inside the container. Prerequisites The container-tools module is installed. Procedure Run the container named log_test and mount the host /dev/log device inside the container: Use the journalctl utility to display logs: The --rm option removes the container when it exits. Additional resources podman-run man page on your system 5.8. Mounting a container filesystem Use the podman mount command to mount a working container root filesystem in a location accessible from the host. Prerequisites The container-tools module is installed. Procedure Run the container named mysyslog : Optional: List all containers: Mount the mysyslog container: Display the content of the mount point using the ls command: Display the OS version: Additional resources podman-mount man page on your system 5.9. Running a service as a daemon with a static IP The following example runs the rsyslog service as a daemon process in the background. The --ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). After that, you can run the podman inspect command to check that you set the IP address properly. Prerequisites The container-tools module is installed. Procedure Set the container network interface to the IP address 10.88.0.44: Check that the IP address is set properly: Additional resources podman-inspect and podman-run man pages on your system 5.10. Executing commands inside a running container Use the podman exec command to execute a command in a running container and investigate that container. The reason for using the podman exec command instead of the podman run command is that you can investigate the running container without interrupting the container activity. Prerequisites The container-tools module is installed. The container is running.
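The procedure that follows uses invocations of the general shape sketched below; the container name myrsyslog and the commands shown are examples, not additional required steps:
# One-shot command inside a running container
podman exec -it myrsyslog rpm -qa
# Interactive shell inside the same running container
podman exec -it myrsyslog /bin/bash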
Procedure Execute the rpm -qa command inside the myrsyslog container to list all installed packages: Execute a /bin/bash command in the myrsyslog container: Install the procps-ng package containing a set of system utilities (for example ps , top , uptime , and so on): Inspect the container: To list every process on the system: To display file system disk space usage: To display system information: To display the amount of free and used memory in megabytes: Additional resources podman-exec man page on your system 5.11. Sharing files between two containers You can use volumes to persist data in containers even when a container is deleted. Volumes can be used for sharing data among multiple containers. The volume is a directory that is stored on the host machine. The volume can be shared between the container and the host. The main advantages are: Volumes can be shared among the containers. Volumes are easier to back up or migrate. Volumes do not increase the size of the containers. Prerequisites The container-tools module is installed. Procedure Create a volume: Display information about the volume: Notice that it creates a volume in the volumes directory. You can save the mount point path to the variable for easier manipulation: $ mntPoint=$(podman volume inspect hostvolume --format {{.Mountpoint}}) . Notice that if you run sudo podman volume create hostvolume , then the mount point changes to /var/lib/containers/storage/volumes/hostvolume/_data . Create a text file inside the directory using the path that is stored in the mntPoint variable: List all files in the directory defined by the mntPoint variable: Run the container named myubi1 and map the directory defined by the hostvolume volume name on the host to the /containervolume1 directory on the container: Note that if you use the volume path defined by the mntPoint variable ( -v $mntPoint:/containervolume1 ), data can be lost when running the podman volume prune command, which removes unused volumes. Always use -v hostvolume_name:/containervolume_name . List the files in the shared volume on the container: You can see the host.txt file which you created on the host. Create a text file inside the /containervolume1 directory: Detach from the container with CTRL+p and CTRL+q . List the files in the shared volume on the host; you should see two files: At this point, you are sharing files between the container and host. To share files between two containers, run another container named myubi2 . Run the container named myubi2 and map the directory defined by the hostvolume volume name on the host to the /containervolume2 directory on the container: List the files in the shared volume on the container: You can see the host.txt file which you created on the host and container1.txt which you created inside the myubi1 container. Create a text file inside the /containervolume2 directory: Detach from the container with CTRL+p and CTRL+q . List the files in the shared volume on the host; you should see three files: Additional resources podman-volume man page on your system 5.12. Exporting and importing containers You can use the podman export command to export the file system of a running container to a tarball on your local machine. For example, if you have a large container that you use infrequently or one that you want to save a snapshot of in order to revert back to it later, you can use the podman export command to export a current snapshot of your running container into a tarball.
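In its simplest hedged form, that export step looks like the following sketch; the file name is an example:
podman export -o <snapshot>.tar <container_id_or_name>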
You can use the podman import command to import a tarball and save it as a filesystem image. Then you can run this filesystem image or you can use it as a layer for other images. Prerequisites The container-tools module is installed. Procedure Run the myubi container based on the registry.access.redhat.com/ubi8/ubi image: Optional: List all containers: Attach to the myubi container: Create a file named testfile : Detach from the container with CTRL+p and CTRL+q . Export the file system of the myubi container as myubi-container.tar on the local machine: Optional: List the current directory content: Optional: Create a myubi-container directory, extract all files from the myubi-container.tar archive, and list the content of the myubi-container directory in a tree-like format: You can see that the myubi-container.tar contains the container file system. Import the myubi-container.tar archive and save it as a filesystem image: List all images: Display the content of the testfile file: Additional resources podman-export and podman-import man pages on your system 5.13. Stopping containers Use the podman stop command to stop a running container. You can specify the containers by their container ID or name. Prerequisites The container-tools module is installed. At least one container is running. Procedure Stop the myubi container: Using the container name: Using the container ID: To stop a running container that is attached to a terminal session, you can enter the exit command inside the container. The podman stop command sends a SIGTERM signal to terminate a running container. If the container does not stop after a defined period (10 seconds by default), Podman sends a SIGKILL signal. You can also use the podman kill command to kill a container (SIGKILL) or send a different signal to a container. Here is an example of sending a SIGHUP signal to a container (if supported by the application, a SIGHUP causes the application to re-read its configuration files): Additional resources podman-stop and podman-kill man pages on your system 5.14. Removing containers Use the podman rm command to remove containers. You can specify containers with the container ID or name. Prerequisites The container-tools module is installed. At least one container has been stopped. Procedure List all containers, running or stopped: Remove the containers: To remove the peaceful_hopper container: Notice that the peaceful_hopper container was in Exited status, which means it was stopped and it can be removed immediately. To remove the musing_brown container, first stop the container and then remove it: NOTE To remove multiple containers: To remove all containers from your local system: Additional resources podman-rm man page on your system 5.15. Creating SELinux policies for containers To generate SELinux policies for containers, use the UDICA tool. For more information, see Introduction to the udica SELinux policy generator . 5.16. Configuring pre-execution hooks in Podman You can create plugin scripts to define fine-grained control over container operations, especially blocking unauthorized actions, for example pulling, running, or listing container images. Note The file /etc/containers/podman_preexec_hooks.txt must be created by an administrator and can be empty. If the /etc/containers/podman_preexec_hooks.txt file does not exist, the plugin scripts will not be executed. The following rules apply to the plugin scripts: Have to be root-owned and not writable. Have to be located in the /usr/libexec/podman/pre-exec-hooks and /etc/containers/pre-exec-hooks directories.
Execute sequentially, in alphanumeric order. If all plugin scripts return a zero value, then the podman command is executed. If any of the plugin scripts return a non-zero value, it indicates a failure. The podman command exits and returns the non-zero value of the first-failed script. Red Hat recommends using the following naming convention to execute the scripts in the correct order: DDD_name.lang , where: The DDD is the decimal number indicating the order of script execution. Use one or two leading zeros if necessary. The name is the name of the plugin script. The lang (optional) is the file extension for the given programming language. For example, the name of the plugin script can be: 001-check-groups.sh . Note The plugin scripts are valid at the time of creation. Containers created before the plugin scripts were added are not affected. Prerequisites The container-tools module is installed. Procedure Create the plugin script named 001-check-groups.sh . For example: The script checks if a user is in a specified group. The USER and GROUP are environment variables set by Podman. The exit code provided by the 001-check-groups.sh script is passed to the podman binary. The podman command exits and returns the non-zero value of the first-failed script. Verification Check if the 001-check-groups.sh script works correctly: If the user is not in the correct group, the following error appears: 5.17. Debugging applications in containers You can use various command-line tools tailored to different aspects of troubleshooting. For more information, see Debugging applications in containers .
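Returning to the pre-execution hooks described in Section 5.16, a hedged sketch of the setup steps follows; the install options are assumptions, and any method that leaves the script root-owned and not writable by other users also satisfies the rules:
# The marker file must exist (it can be empty) for the hooks to run
sudo touch /etc/containers/podman_preexec_hooks.txt
# Place the script in one of the allowed directories with root ownership
sudo mkdir -p /etc/containers/pre-exec-hooks
sudo install -o root -g root -m 0755 001-check-groups.sh /etc/containers/pre-exec-hooks/001-check-groups.sh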
|
[
"run [options] image [command [arg ...]]",
"podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/os-release NAME=\"Red Hat Enterprise Linux\" ID=\"rhel\" HOME_URL=\"https://www.redhat.com/\" BUG_REPORT_URL=\"https://bugzilla.redhat.com/\" REDHAT_BUGZILLA_PRODUCT=\" Red Hat Enterprise Linux 8\"",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES",
"podman run --name=myubi -it registry.access.redhat.com/ubi8/ubi /bin/bash",
"yum install procps-ng",
"ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 12:55 pts/0 00:00:00 /bin/bash root 31 1 0 13:07 pts/0 00:00:00 ps -ef",
"exit",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1984555a2c27 registry.redhat.io/ubi8/ubi:latest /bin/bash 21 minutes ago Exited (0) 21 minutes ago myubi",
"podman run -d registry.redhat.io/rhel8/rsyslog",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 74b1da000a11 rhel8/rsyslog /bin/rsyslog.sh 2 minutes ago Up About a minute musing_brown",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES IS INFRA d65aecc325a4 ubi8/ubi /bin/bash 3 secs ago Exited (0) 5 secs ago peaceful_hopper false 74b1da000a11 rhel8/rsyslog rsyslog.sh 2 mins ago Up About a minute musing_brown false",
"podman start myubi",
"podman start -a -i myubi",
"exit",
"podman inspect 64ad95327c74 [ { \"Id\": \"64ad95327c740ad9de468d551c50b6d906344027a0e645927256cd061049f681\", \"Created\": \"2021-03-02T11:23:54.591685515+01:00\", \"Path\": \"/bin/rsyslog.sh\", \"Args\": [ \"/bin/rsyslog.sh\" ], \"State\": { \"OciVersion\": \"1.0.2-dev\", \"Status\": \"running\",",
"podman inspect --format='{{.State.StartedAt}}' 64ad95327c74 2021-03-02 11:23:54.945071961 +0100 CET",
"podman run --name=\"log_test\" -v /dev/log:/dev/log --rm registry.redhat.io/ubi8/ubi logger \"Testing logging to the host\"",
"journalctl -b | grep Testing Dec 09 16:55:00 localhost.localdomain root[14634]: Testing logging to the host",
"podman run -d --name=mysyslog registry.redhat.io/rhel8/rsyslog",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c56ef6a256f8 registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 20 minutes ago Up 20 minutes ago mysyslog",
"podman mount mysyslog /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged",
"ls /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var",
"cat /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged/etc/os-release NAME=\"Red Hat Enterprise Linux\" VERSION=\"8 (Ootpa)\" ID=\"rhel\" ID_LIKE=\"fedora\"",
"podman run -d --ip=10.88.0.44 registry.access.redhat.com/rhel8/rsyslog efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85",
"podman inspect efde5f0a8c723 | grep 10.88.0.44 \"IPAddress\": \"10.88.0.44\",",
"podman exec -it myrsyslog rpm -qa tzdata-2020d-1.el8.noarch python3-pip-wheel-9.0.3-18.el8.noarch redhat-release-8.3-1.0.el8.x86_64 filesystem-3.8-3.el8.x86_64",
"podman exec -it myrsyslog /bin/bash",
"yum install procps-ng",
"ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 10:23 ? 00:00:01 /usr/sbin/rsyslogd -n root 8 0 0 11:07 pts/0 00:00:00 /bin/bash root 47 8 0 11:13 pts/0 00:00:00 ps -ef",
"df -h Filesystem Size Used Avail Use% Mounted on fuse-overlayfs 27G 7.1G 20G 27% / tmpfs 64M 0 64M 0% /dev tmpfs 269M 936K 268M 1% /etc/hosts shm 63M 0 63M 0% /dev/shm",
"uname -r 4.18.0-240.10.1.el8_3.x86_64",
"free --mega total used free shared buff/cache available Mem: 2818 615 1183 12 1020 1957 Swap: 3124 0 3124",
"podman volume create hostvolume",
"podman volume inspect hostvolume [ { \"name\": \"hostvolume\", \"labels\": {}, \"mountpoint\": \"/home/username/.local/share/containers/storage/volumes/hostvolume/_data\", \"driver\": \"local\", \"options\": {}, \"scope\": \"local\" } ]",
"echo \"Hello from host\" >> USDmntPoint/host.txt",
"ls USDmntPoint/ host.txt",
"podman run -it --name myubi1 -v hostvolume:/containervolume1 registry.access.redhat.com/ubi8/ubi /bin/bash",
"ls /containervolume1 host.txt",
"echo \"Hello from container 1\" >> /containervolume1/container1.txt",
"ls USDmntPoint container1.rxt host.txt",
"podman run -it --name myubi2 -v hostvolume:/containervolume2 registry.access.redhat.com/ubi8/ubi /bin/bash",
"ls /containervolume2 container1.txt host.txt",
"echo \"Hello from container 2\" >> /containervolume2/container2.txt",
"ls USDmntPoint container1.rxt container2.txt host.txt",
"podman run -dt --name=myubi registry.access.redhat.com/8/ubi",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a6a6d4896142 registry.access.redhat.com/8:latest /bin/bash 7 seconds ago Up 7 seconds ago myubi",
"podman attach myubi",
"echo \"hello\" > testfile",
"podman export -o myubi.tar a6a6d4896142",
"ls -l -rw-r--r--. 1 user user 210885120 Apr 6 10:50 myubi-container.tar",
"mkdir myubi-container tar -xf myubi-container.tar -C myubi-container tree -L 1 myubi-container ├── bin -> usr/bin ├── boot ├── dev ├── etc ├── home ├── lib -> usr/lib ├── lib64 -> usr/lib64 ├── lost+found ├── media ├── mnt ├── opt ├── proc ├── root ├── run ├── sbin -> usr/sbin ├── srv ├── sys ├── testfile ├── tmp ├── usr └── var 20 directories, 1 file",
"podman import myubi.tar myubi-imported Getting image source signatures Copying blob 277cab30fe96 done Copying config c296689a17 done Writing manifest to image destination Storing signatures c296689a17da2f33bf9d16071911636d7ce4d63f329741db679c3f41537e7cbf",
"podman images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/myubi-imported latest c296689a17da 51 seconds ago 211 MB",
"podman run -it --name=myubi-imported docker.io/library/myubi-imported cat testfile hello",
"podman stop myubi",
"podman stop 1984555a2c27",
"*podman kill --signal=\"SIGHUP\" 74b1da000a11* 74b1da000a114015886c557deec8bed9dfb80c888097aa83f30ca4074ff55fb2",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES IS INFRA d65aecc325a4 ubi8/ubi /bin/bash 3 secs ago Exited (0) 5 secs ago peaceful_hopper false 74b1da000a11 rhel8/rsyslog rsyslog.sh 2 mins ago Up About a minute musing_brown false",
"podman rm peaceful_hopper",
"podman stop musing_brown podman rm musing_brown",
"podman rm clever_yonath furious_shockley",
"podman rm -a",
"#!/bin/bash if id -nG \"USDUSER\" 2> /dev/null | grep -qw \"USDGROUP\" 2> /dev/null ; then exit 0 else exit 1 fi",
"podman run image",
"external preexec hook /etc/containers/pre-exec-hooks/001-check-groups.sh failed"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_working-with-containers_building-running-and-managing-containers
|
Chapter 14. Enabling SSL/TLS on Overcloud Public Endpoints
|
Chapter 14. Enabling SSL/TLS on Overcloud Public Endpoints By default, the overcloud uses unencrypted endpoints for its services. This means that the overcloud configuration requires an additional environment file to enable SSL/TLS for its Public API endpoints. The following chapter shows how to configure your SSL/TLS certificate and include it as part of your overcloud creation. Note This process only enables SSL/TLS for Public API endpoints. The Internal and Admin APIs remain unencrypted. This process requires network isolation to define the endpoints for the Public API. 14.1. Initializing the Signing Host The signing host is the host that generates and signs new certificates with a certificate authority. If you have never created SSL certificates on the chosen signing host, you might need to initialize the host so that it can sign new certificates. The /etc/pki/CA/index.txt file contains records of all signed certificates. Check if this file exists. If the file does not exist, create the directory path if needed, then create an empty file, index.txt : The /etc/pki/CA/serial file identifies the serial number to use for the next certificate to sign. Check if this file exists. If the file does not exist, create a new file, serial , with a starting value of 1000 : 14.2. Creating a Certificate Authority Normally you sign your SSL/TLS certificates with an external certificate authority. In some situations, you might want to use your own certificate authority. For example, you might want to have an internal-only certificate authority. Generate a key and certificate pair to act as the certificate authority: The openssl req command asks for certain details about your authority. Enter these details at the prompt. These commands create a certificate authority file called ca.crt.pem . 14.3. Adding the Certificate Authority to Clients For any external clients aiming to communicate using SSL/TLS, copy the certificate authority file to each client that requires access to your Red Hat OpenStack Platform environment. After you copy the certificate authority file to each client, run the following command on each client to add the certificate to the certificate authority trust bundle: For example, the undercloud requires a copy of the certificate authority file so that it can communicate with the overcloud endpoints during creation. 14.4. Creating an SSL/TLS Key Run the following commands to generate the SSL/TLS key ( server.key.pem ) that you use at different points to generate your undercloud or overcloud certificates: 14.5. Creating an SSL/TLS Certificate Signing Request This procedure creates a certificate signing request for the overcloud. Copy the default OpenSSL configuration file for customization. Edit the custom openssl.cnf file and set the SSL parameters to use for the overcloud. Examples of the types of parameters to modify include: Set the commonName_default to one of the following: If using an IP to access over SSL/TLS, use the Virtual IP for the Public API. Set this VIP using the PublicVirtualFixedIPs parameter in an environment file. For more information, see Section 13.4, "Assigning Predictable Virtual IPs" . If you are not using predictable VIPs, the director assigns the first IP address from the range defined in the ExternalAllocationPools parameter. If using a fully qualified domain name to access over SSL/TLS, use the domain name instead. Include the same Public API IP address as an IP entry and a DNS entry in the alt_names section.
If also using DNS, include the hostname for the server as DNS entries in the same section. For more information about openssl.cnf , run man openssl.cnf . Run the following command to generate the certificate signing request ( server.csr.pem ): Make sure to include the SSL/TLS key you created in Section 14.4, "Creating an SSL/TLS Key" for the -key option. Use the server.csr.pem file to create the SSL/TLS certificate in the next section. 14.6. Creating the SSL/TLS Certificate Run the following command to create a certificate for your undercloud or overcloud: This command uses the following options: The configuration file specifying the v3 extensions. Include the configuration file with the -config option. The certificate signing request from Section 14.5, "Creating an SSL/TLS Certificate Signing Request" to generate and sign the certificate with a certificate authority. Include the certificate signing request with the -in option. The certificate authority you created in Section 14.2, "Creating a Certificate Authority" , which signs the certificate. Include the certificate authority with the -cert option. The certificate authority private key you created in Section 14.2, "Creating a Certificate Authority" . Include the private key with the -keyfile option. This command creates a new certificate named server.crt.pem . Use this certificate in conjunction with the SSL/TLS key from Section 14.4, "Creating an SSL/TLS Key" to enable SSL/TLS. 14.7. Enabling SSL/TLS Copy the enable-tls.yaml environment file from the Heat template collection: Edit this file and make the following changes for these parameters: SSLCertificate Copy the contents of the certificate file ( server.crt.pem ) into the SSLCertificate parameter. For example: Important The certificate contents require the same indentation level for all new lines. SSLIntermediateCertificate If you have an intermediate certificate, copy the contents of the intermediate certificate into the SSLIntermediateCertificate parameter: Important The certificate contents require the same indentation level for all new lines. SSLKey Copy the contents of the private key ( server.key.pem ) into the SSLKey parameter. For example: Important The private key contents require the same indentation level for all new lines. 14.8. Injecting a Root Certificate If the certificate signer is not in the default trust store on the overcloud image, you must inject the certificate authority into the overcloud image. Copy the inject-trust-anchor-hiera.yaml environment file from the Heat template collection: Edit this file and make the following changes for these parameters: CAMap Lists the content of each certificate authority (CA) to inject into the overcloud. The overcloud requires the CA files used to sign the certificates for both the undercloud and the overcloud. Copy the contents of the root certificate authority file ( ca.crt.pem ) into an entry. For example, your CAMap parameter might look like the following: Important The certificate authority contents require the same indentation level for all new lines. You can also inject additional CAs with the CAMap parameter. 14.9. Configuring DNS endpoints If using a DNS hostname to access the overcloud through SSL/TLS, you will need to copy the custom-domain.yaml file into /home/stack/templates . You can find this file in /usr/share/tripleo-heat-templates/environments/predictable-placement/ .
Configure the host and domain names for all fields, adding parameters for custom networks if needed: Note It is not possible to redeploy with a TLS-everywhere architecture if this environment file is not included in the initial deployment. Add a list of DNS servers to use under the parameter_defaults section, in either a new or existing environment file: 14.10. Adding Environment Files During Overcloud Creation The deployment command ( openstack overcloud deploy ) uses the -e option to add environment files. Add the environment files from this section in the following order: The environment file to enable SSL/TLS ( enable-tls.yaml ) The environment file to set the DNS hostname ( cloudname.yaml ) The environment file to inject the root certificate authority ( inject-trust-anchor-hiera.yaml ) The environment file to set the public endpoint mapping: If using a DNS name for accessing the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml If using an IP address for accessing the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml For example: 14.11. Updating SSL/TLS Certificates If you need to update certificates in the future: Edit the enable-tls.yaml file and update the SSLCertificate , SSLKey , and SSLIntermediateCertificate parameters. If your certificate authority has changed, edit the inject-trust-anchor.yaml file and update the SSLRootCertificate parameter. Once the new certificate content is in place, rerun your deployment command. For example:
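Before you paste new certificate content into enable-tls.yaml and rerun the deployment, it can help to sanity-check the material. The following is a minimal sketch, assuming the file names used in this chapter (ca.crt.pem, server.crt.pem, server.key.pem); adjust the paths for your environment:

# Confirm that the server certificate is signed by the CA from this chapter:
openssl verify -CAfile ca.crt.pem server.crt.pem

# Confirm that the certificate and private key belong together by comparing
# their public key digests; the two digests must be identical:
openssl x509 -in server.crt.pem -pubkey -noout | openssl sha256
openssl pkey -in server.key.pem -pubout | openssl sha256

# Confirm that the Public API IP and DNS entries from openssl.cnf are present:
openssl x509 -in server.crt.pem -noout -text | grep -A1 'Subject Alternative Name'

If the two digests differ, the key and certificate do not match and should not be copied into enable-tls.yaml.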
|
[
"mkdir -p /etc/pki/CA sudo touch /etc/pki/CA/index.txt",
"echo '1000' | sudo tee /etc/pki/CA/serial",
"openssl genrsa -out ca.key.pem 4096 openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem",
"sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust extract",
"openssl genrsa -out server.key.pem 2048",
"cp /etc/pki/tls/openssl.cnf .",
"[req] distinguished_name = req_distinguished_name req_extensions = v3_req [req_distinguished_name] countryName = Country Name (2 letter code) countryName_default = AU stateOrProvinceName = State or Province Name (full name) stateOrProvinceName_default = Queensland localityName = Locality Name (eg, city) localityName_default = Brisbane organizationalUnitName = Organizational Unit Name (eg, section) organizationalUnitName_default = Red Hat commonName = Common Name commonName_default = 10.0.0.1 commonName_max = 64 Extensions to add to a certificate request basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] IP.1 = 10.0.0.1 DNS.1 = 10.0.0.1 DNS.2 = myovercloud.example.com",
"openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem",
"sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem",
"cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml ~/templates/.",
"parameter_defaults: SSLCertificate: | -----BEGIN CERTIFICATE----- MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGS sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQ -----END CERTIFICATE-----",
"parameter_defaults: SSLIntermediateCertificate: | -----BEGIN CERTIFICATE----- sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvpBCwUAMFgxCzAJB MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQE -----END CERTIFICATE-----",
"parameter_defaults: SSLKey: | -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4X -----END RSA PRIVATE KEY-----",
"cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor-hiera.yaml ~/templates/.",
"parameter_defaults: CAMap: undercloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCS BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBw UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBA -----END CERTIFICATE----- overcloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDBzCCAe+gAwIBAgIJAIc75A7FD++DMA0GCS BAMMD3d3dy5leGFtcGxlLmNvbTAeFw0xOTAxMz Um54yGCARyp3LpkxvyfMXX1DokpS1uKi7s6CkF -----END CERTIFICATE-----",
"title: Custom Domain Name description: | This environment contains the parameters that need to be set in order to use a custom domain name and have all of the various FQDNs reflect it. parameter_defaults: # The DNS domain used for the hosts. This must match the overcloud_domain_name configured on the undercloud. # Type: string CloudDomain: localdomain # The DNS name of this cloud. E.g. ci-overcloud.tripleo.org # Type: string CloudName: overcloud.localdomain # The DNS name of this cloud's provisioning network endpoint. E.g. 'ci-overcloud.ctlplane.tripleo.org'. # Type: string CloudNameCtlplane: overcloud.ctlplane.localdomain # The DNS name of this cloud's internal_api endpoint. E.g. 'ci-overcloud.internalapi.tripleo.org'. # Type: string CloudNameInternal: overcloud.internalapi.localdomain # The DNS name of this cloud's storage endpoint. E.g. 'ci-overcloud.storage.tripleo.org'. # Type: string CloudNameStorage: overcloud.storage.localdomain # The DNS name of this cloud's storage_mgmt endpoint. E.g. 'ci-overcloud.storagemgmt.tripleo.org'. # Type: string CloudNameStorageManagement: overcloud.storagemgmt.localdomain",
"parameter_defaults: DnsServers: [\"10.0.0.254\"] .",
"openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml",
"openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml"
] |
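The SSLCertificate , SSLIntermediateCertificate , and SSLKey values shown above must keep the same indentation on every line of the PEM content. A small helper such as the following sketch can produce uniformly indented blocks to paste into enable-tls.yaml ; the four-space indentation and the output file name are assumptions, so match them to the layout of your copy of the environment file:

# Sketch only: print a parameter_defaults fragment with consistently
# indented PEM content taken from the files created in this chapter.
{
  echo "parameter_defaults:"
  echo "  SSLCertificate: |"
  sed 's/^/    /' server.crt.pem
  echo "  SSLKey: |"
  sed 's/^/    /' server.key.pem
} > tls-parameters-snippet.yaml

Review the generated fragment against your existing enable-tls.yaml before merging it, because your copy of the file may contain other parameters and resource registry entries that must be preserved.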
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/sect-Enabling_SSLTLS_on_the_Overcloud
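After the deployment or redeployment completes, you can confirm that the Public API presents the expected certificate. The following is a minimal sketch; 10.0.0.1 stands in for the Public API virtual IP used for commonName_default earlier in the chapter, and 13000 is a placeholder port, so substitute the address and port of a public endpoint in your environment:

# Sketch only: retrieve the certificate served by a public endpoint and
# print its subject, issuer, and validity dates.
echo | openssl s_client -connect 10.0.0.1:13000 -CAfile ca.crt.pem 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates

Dropping the final pipe shows the full s_client output, including the verification result against the supplied CA file.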
|
Authentication and authorization
|
Authentication and authorization OpenShift Container Platform 4.15 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
|
[
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1",
"oc apply -f </path/to/file.yaml>",
"oc describe oauth.config.openshift.io/cluster",
"Spec: Token Config: Access Token Max Age Seconds: 172800",
"oc edit oauth cluster",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1",
"oc get clusteroperators authentication",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 145m",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.15.0 True False False 145m",
"error: You must be logged in to the server (Unauthorized)",
"oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }",
"oc get events | grep ServiceAccount",
"1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"oc describe sa/proxy | grep -A5 Events",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens",
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')",
"oc edit oauthclient <oauth_client> 1",
"apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1",
"oc get useroauthaccesstokens",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc get useroauthaccesstokens --field-selector=clientName=\"console\"",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc describe useroauthaccesstokens <token_name>",
"Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>",
"oc delete useroauthaccesstokens <token_name>",
"useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted",
"oc delete secrets kubeadmin -n kube-system",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc create user <username>",
"oc create identity <identity_provider>:<identity_provider_user_id>",
"oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>",
"htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>",
"htpasswd -c -B -b users.htpasswd <username> <password>",
"Adding password for user user1",
"htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>",
"> htpasswd.exe -c -B -b <\\path\\to\\users.htpasswd> <username> <password>",
"> htpasswd.exe -c -B -b users.htpasswd <username> <password>",
"Adding password for user user1",
"> htpasswd.exe -b <\\path\\to\\users.htpasswd> <username> <password>",
"oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1",
"apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd",
"htpasswd -bB users.htpasswd <username> <password>",
"Adding password for user <username>",
"htpasswd -D users.htpasswd <username>",
"Deleting password for user <username>",
"oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -",
"apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>",
"oc delete user <username>",
"user.user.openshift.io \"<username>\" deleted",
"oc delete identity my_htpasswd_provider:<username>",
"identity.user.openshift.io \"my_htpasswd_provider:<username>\" deleted",
"oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"ldap://host:port/basedn?attribute?scope?filter",
"(&(<filter>)(<attribute>=<username>))",
"ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)",
"oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1",
"apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password>",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - uid bindDN: \"\" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: \"ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid\" 11",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"{\"error\":\"Error message\"}",
"{\"sub\":\"userid\"} 1",
"{\"sub\":\"userid\", \"name\": \"User Name\", ...}",
"{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}",
"{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}",
"oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"<VirtualHost *:443> # CGI Scripts in here DocumentRoot /var/www/cgi-bin # SSL Directives SSLEngine on SSLCipherSuite PROFILE=SYSTEM SSLProxyCipherSuite PROFILE=SYSTEM SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Configure HTTPD to execute scripts ScriptAlias /basic /var/www/cgi-bin # Handles a failed login attempt ErrorDocument 401 /basic/fail.cgi # Handles authentication <Location /basic/login.cgi> AuthType Basic AuthName \"Please Log In\" AuthBasicProvider file AuthUserFile /etc/httpd/conf/passwords Require valid-user </Location> </VirtualHost>",
"#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"sub\":\"userid\", \"name\":\"'USDREMOTE_USER'\"}' exit 0",
"#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"error\": \"Login failure\"}' exit 0",
"curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp",
"{\"sub\":\"userid\"}",
"{\"sub\":\"userid\", \"name\": \"User Name\", ...}",
"{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}",
"{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: \"https://www.example.com/challenging-proxy/oauth/authorize?USD{query}\" 3 loginURL: \"https://www.example.com/login-proxy/oauth/authorize?USD{query}\" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"LoadModule request_module modules/mod_request.so LoadModule auth_gssapi_module modules/mod_auth_gssapi.so Some Apache configurations might require these modules. LoadModule auth_form_module modules/mod_auth_form.so LoadModule session_module modules/mod_session.so Nothing needs to be served over HTTP. This virtual host simply redirects to HTTPS. <VirtualHost *:80> DocumentRoot /var/www/html RewriteEngine On RewriteRule ^(.*)USD https://%{HTTP_HOST}USD1 [R,L] </VirtualHost> <VirtualHost *:443> # This needs to match the certificates you generated. See the CN and X509v3 # Subject Alternative Name in the output of: # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt ServerName www.example.com DocumentRoot /var/www/html SSLEngine on SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key SSLCACertificateFile /etc/pki/CA/certs/ca.crt SSLProxyEngine on SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt # It is critical to enforce client certificates. Otherwise, requests can # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint # directly. SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem # To use the challenging-proxy, an X-Csrf-Token must be present. RewriteCond %{REQUEST_URI} ^/challenging-proxy RewriteCond %{HTTP:X-Csrf-Token} ^USD [NC] RewriteRule ^.* - [F,L] <Location /challenging-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" # For Kerberos AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On # For ldap: # AuthBasicProvider ldap # AuthLDAPURL \"ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)\" </Location> <Location /login-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On ErrorDocument 401 /login.html </Location> </VirtualHost> RequestHeader unset X-Remote-User",
"identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: \"https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}\" loginURL: \"https://<namespace_route>/login-proxy/oauth/authorize?USD{query}\" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User",
"curl -L -k -H \"X-Remote-User: joe\" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request",
"curl -L -k -H \"X-Remote-User: joe\" https://<namespace_route>/oauth/token/request",
"curl -k -v -H 'X-Csrf-Token: 1' https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token",
"curl -k -v -H 'X-Csrf-Token: 1' <challengeURL_redirect + query>",
"kdestroy -c cache_name 1",
"oc login -u <username>",
"oc logout",
"kinit",
"oc login",
"https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>",
"https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b",
"oc apply -f </path/to/CR>",
"oc login --token=<token>",
"oc whoami",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: \"example.com\" 5",
"oc apply -f </path/to/CR>",
"oc login --token=<token>",
"oc whoami",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: \"true\" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com",
"oc apply -f </path/to/CR>",
"oc login --token=<token>",
"oc login -u <identity_provider_username> --server=<api_server_url_and_port>",
"oc whoami",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>",
"oc policy add-role-to-user view system:serviceaccount:top-secret:robot",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret",
"oc policy add-role-to-user <role_name> -z <service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>",
"oc policy add-role-to-group view system:serviceaccounts -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts",
"oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers",
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>",
"oc sa get-token <service_account_name>",
"serviceaccounts.openshift.io/oauth-redirecturi.<name>",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"oc edit authentications cluster",
"spec: serviceAccountIssuer: https://test.default.svc 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: Pod metadata: name: nginx spec: securityContext: runAsNonRoot: true 1 seccompProfile: type: RuntimeDefault 2 containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] serviceAccountName: build-robot 3 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 4 expirationSeconds: 7200 5 audience: vault 6",
"oc create -f pod-projected-svc-token.yaml",
"oc create token build-robot",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ",
"runAsUser: type: MustRunAs uid: <id>",
"runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>",
"runAsUser: type: MustRunAsNonRoot",
"runAsUser: type: RunAsAny",
"allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'",
"apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0",
"apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0",
"kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group",
"requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT",
"oc create -f scc-admin.yaml",
"securitycontextconstraints \"scc-admin\" created",
"oc get scc scc-admin",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]",
"apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: template: metadata: annotations: openshift.io/required-scc: \"my-scc\" 1",
"oc create -f deployment.yaml",
"oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\\.io\\/scc}{\"\\n\"}' 1",
"my-scc",
"oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use",
"oc get scc",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]",
"oc describe scc restricted",
"Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>",
"oc edit scc <scc_name>",
"oc delete scc <scc_name>",
"oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false",
"oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true",
"oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name' | sort | uniq -c",
"1 test-namespace my-pod",
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>",
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>",
"url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5",
"baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6",
"groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 bindDN: cn=admin,dc=example,dc=com bindPassword: file: \"/etc/secrets/bindPassword\" rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4",
"oc adm groups sync --sync-config=config.yaml --confirm",
"oc adm groups sync --type=openshift --sync-config=config.yaml --confirm",
"oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm",
"oc adm groups sync --blacklist=<blacklist_file> --sync-config=config.yaml --confirm",
"oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm",
"oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm",
"oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm",
"oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"oc new-project ldap-sync 1",
"kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync",
"oc create -f ldap-sync-service-account.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - user.openshift.io resources: - groups verbs: - get - list - create - update",
"oc create -f ldap-sync-cluster-role.yaml",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2",
"oc create -f ldap-sync-cluster-role-binding.yaml",
"kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: \"/etc/secrets/bindPassword\" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" 5 scope: sub filter: \"(objectClass=groupOfMembers)\" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false",
"oc create -f ldap-sync-config-map.yaml",
"kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: \"*/30 * * * *\" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: \"registry.redhat.io/openshift4/ose-cli:latest\" command: - \"/bin/bash\" - \"-c\" - \"oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm\" 4 volumeMounts: - mountPath: \"/etc/config\" name: \"ldap-sync-volume\" - mountPath: \"/etc/secrets\" name: \"ldap-bind-password\" - mountPath: \"/etc/ldap-ca\" name: \"ldap-ca\" volumes: - name: \"ldap-sync-volume\" configMap: name: \"ldap-group-syncer\" - name: \"ldap-bind-password\" secret: secretName: \"ldap-secret\" 5 - name: \"ldap-ca\" configMap: name: \"ca-config-map\" 6 restartPolicy: \"Never\" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: \"ClusterFirst\" serviceAccountName: \"ldap-group-syncer\"",
"oc create -f ldap-sync-cron-job.yaml",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com",
"oc adm groups sync --sync-config=rfc2307_config.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false",
"oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]",
"Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".",
"Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3",
"oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins",
"oc adm groups sync --sync-config=active_directory_config.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com",
"oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5",
"oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>",
"cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r",
"mycluster-2mpcn",
"azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3",
"{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }",
"{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/authentication_and_authorization/index
|
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1]
|
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. status object status represents the current information of a snapshot. 13.1.1. .spec Description spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. Type object Required deletionPolicy driver source volumeSnapshotRef Property Type Description deletionPolicy string deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. driver string driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. source object source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. sourceVolumeMode string SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either "Filesystem" or "Block". If not specified, it indicates the source volume's mode is unknown. This field is immutable. This field is an alpha field. volumeSnapshotClassName string name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation. volumeSnapshotRef object volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. 
For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 13.1.2. .spec.source Description source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. Type object Property Type Description snapshotHandle string snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. volumeHandle string volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable. 13.1.3. .spec.volumeSnapshotRef Description volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 13.1.4. .status Description status represents the current information of a snapshot. Type object Property Type Description creationTime integer creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. error object error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. readyToUse boolean readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. snapshotHandle string snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. 13.1.5. .status.error Description error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 13.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents DELETE : delete collection of VolumeSnapshotContent GET : list objects of kind VolumeSnapshotContent POST : create a VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} DELETE : delete a VolumeSnapshotContent GET : read the specified VolumeSnapshotContent PATCH : partially update the specified VolumeSnapshotContent PUT : replace the specified VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status GET : read status of the specified VolumeSnapshotContent PATCH : partially update status of the specified VolumeSnapshotContent PUT : replace status of the specified VolumeSnapshotContent 13.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents Table 13.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshotContent Table 13.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotContent Table 13.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotContent Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 202 - Accepted VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} Table 13.9. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent Table 13.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a VolumeSnapshotContent Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.12. Body parameters Parameter Type Description body DeleteOptions schema Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotContent Table 13.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotContent Table 13.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.17. Body parameters Parameter Type Description body Patch schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotContent Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.3. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status Table 13.22. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent Table 13.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified VolumeSnapshotContent Table 13.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 13.25. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshotContent Table 13.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.27. Body parameters Parameter Type Description body Patch schema Table 13.28. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshotContent Table 13.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.30. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.31. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty
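To make the field descriptions above concrete, the following is a minimal sketch of a pre-provisioned VolumeSnapshotContent manifest. It is illustrative only: the CSI driver name, snapshot handle, and the referenced VolumeSnapshot name and namespace are assumptions, not values taken from this reference.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-snapshot-content          # VolumeSnapshotContent objects are cluster-scoped
spec:
  deletionPolicy: Retain                   # keep the physical snapshot when the bound VolumeSnapshot is deleted
  driver: ebs.csi.aws.com                  # assumed driver; must match the CSI driver's GetPluginName() value
  source:
    snapshotHandle: snap-0123456789abcdef0 # CSI "snapshot_id" of a pre-existing snapshot (assumed value)
  volumeSnapshotRef:
    name: imported-snapshot                # VolumeSnapshot bound to this content (assumed name)
    namespace: default

Because source uses snapshotHandle rather than volumeHandle, this object represents an existing snapshot being imported; for dynamically provisioned snapshots the CSI snapshotter sidecar creates the VolumeSnapshotContent and fills in these fields itself.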
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/volumesnapshotcontent-snapshot-storage-k8s-io-v1
|
Chapter 2. Editing a routing context in the route editor
|
Chapter 2. Editing a routing context in the route editor 2.1. Adding patterns to a route Routes consist of a sequence of connected patterns, referred to as nodes once they are placed on the canvas inside a Route container node. A complete route typically consists of a starting endpoint, a string of processing nodes, and one or more destination endpoints. When you add a pattern into a Route container on the canvas, the pattern takes on a color that indicates its type of node: Blue - Route containers, which correspond to route elements in the context file, and other container nodes, such as when and otherwise EIPs that contain other EIPs that complete their logic Green - Consumer endpoints that input data entering routes Orange - EIPs that route, transform, process, or control the flow of data transiting routes Purple - Producer endpoints that output the data exiting routes Procedure To add a pattern to a route: In the Palette , locate the pattern that you want to add to the route. Use one of the following methods: Click the pattern in the Palette and then, in the canvas, click the route container. Drag the pattern over the target Route container and drop it. Alternatively, you can add a pattern on an existing node that has no outgoing connection, or on a connection existing between two nodes, to have the tooling automatically wire the connections between all nodes involved. The tooling checks whether the resulting connection is valid and then either allows or prevents you from adding the pattern on the target. For valid connections, the tooling behaves differently depending on whether the target is a node or a connection: For an existing node , the tooling adds the new node to the target node's outgoing side (beneath or to the right of it depending on how the editor preferences are set ) and automatically wires the connection between them For an existing connection , the tooling inserts the new node between the two connected nodes and automatically rewires the connections between the three nodes Optionally, you can manually connect two nodes: In the Route container on the canvas, select the source node to display its connector arrow. Drag the source node's connector arrow ( ) to the target node, and release the mouse button to drop the connector on it. Note Not all nodes can be connected. When you try to connect a source node to an invalid target node, the tooling displays the symbol attached to the mouse cursor, and the connector fails to stick to the target node. After you add a pattern inside a Route container, you can drag it to different location inside the route container or to another route container on the canvas, as long as it can establish a valid connection. You can also relocate existing nodes that are already connected, as long as the move can establish another valid connection. To view a short video that illustrates how to reposition endpoints, click here . Select File Save . The tooling saves routes in the context file regardless of whether they are complete. The new pattern appears on the canvas in the Route container and becomes the selected node. The Properties view displays a list of the new node's properties that you can edit. Changing the layout direction When you connect one node to another, the tooling updates the layout according to the route editor's layout preference. The default is Down . 
To access the route editor 's layout preference: On Linux and Windows machines, select Windows Preferences Fuse Tooling Editor Choose the layout direction for the diagram editor . Related topics Section 2.2, "Configuring a pattern" Section 2.3, "Removing patterns from a route" 2.2. Configuring a pattern Overview Most patterns require some explicit configuration. For example, an endpoint requires an explicitly entered URI . The tooling's Properties view provides a form that lists all of the configuration details a particular pattern supports. The Properties view also provides the following convenience features: validating that all required properties have values validating that supplied values are the correct data type for the property drop-down lists for properties that have a fixed set of values drop-down lists that are populated with the available bean references from the Apache Camel Spring configuration Procedure To configure a pattern: On the canvas, select the node you want to configure. The Properties view lists all of the selected node's properties for you to edit. For EIPs, the Details tab lists all of a pattern's properties. For components from the Components drawer, the Details tab lists the general properties and those that require a value, and the Advanced tab lists additional properties grouped according to function. The Documentation tab describes the pattern and each of its properties. Edit the fields in the Properties view to configure the node. When done, save your work by selecting File Save from the menu bar. 2.3. Removing patterns from a route Overview As you develop and update a route, you may need to remove one or more of the route's nodes. The node's icon makes this easy to do. When you delete a node from the canvas, all of its connections with other nodes in the route are also deleted, and the node is removed from the corresponding route element in the context file. Note You can also remove a node by opening its context menu and selecting Remove . Procedure To remove a node from a route: Select the node you want to delete. Click its icon. Click Yes when asked if you are sure you want to delete this element. The node and all of its connections are deleted from the canvas, and the node is removed from its corresponding route element in the context file. Related topics Section 2.1, "Adding patterns to a route" 2.4. Adding routes to the routing context Overview The camelContext element within an XML context file creates a routing context. The camelContext element can contain one or more routes, and each route, displayed on the canvas as a Route container node, maps to a route element in the generated camelContext element. Procedure To add another route to the camelContext: In the Design tab, do one of the following: Click a Route pattern in the Palette 's Routing drawer and then click in the canvas where you want to place the route. Drag a Route pattern from the Palette 's Routing drawer and drop it onto the canvas. The Properties view displays a list of the new route's properties that you can edit. In the Properties view, enter: An ID (for example, Route2 ) for the new route in the route's Id field Note The tooling automatically assigns an ID to EIP and component patterns dropped on the canvas. You can replace these autogenerated IDs with your own to distinguish the routes in your project. A description of the route in the Description field Values for any other properties, as needed. Required properties are indicated by an asterisk (*). 
On the menu bar, select File Save to save the changes you made to the routing context file. To switch between multiple routes, select the route that you want to display on the canvas by clicking its entry under the project's Camel Contexts folder in the Project Explorer view. To display all routes in the context, as space allows, click the context file entry in the Project Explorer view. To view the code generated by the tooling when you add a route to the canvas, click the Source tab. Note You can alternatively add a route in the Source tab, by adding a <route/> element to the existing list within the camelContext element. 2.5. Deleting a route Overview In some cases you may need to delete an entire route from your routing context. The Route container's icon makes this easy to do. When you delete a route, all of the nodes inside the Route container are also deleted, and the corresponding route element in the context file is removed. Note You can also remove a route by opening the Route container's context menu and selecting Remove . Important You cannot undo this operation. Procedure To delete a route: If the routing context contains more than one route, first select the route you want to delete in the Project Explorer view. On the canvas, click the Route container's icon. Click Yes when asked if you are sure you want to delete this element. The route is removed from the canvas, from the context file, and from the Project Explorer view. 2.6. Adding global endpoints, data formats, or beans Overview Some routes rely on shared configuration provided by global endpoints, global data formats, or global beans. You can add global elements to the project's routing context file by using the route editor's Configurations tab. To add global elements to your routing context file: Open your routing context file in the route editor. At the bottom of the route editor, click the Configurations tab to display global configurations, if there are any. Click Add to open the Create a new global element dialog. The options are: Endpoint - see the section called "Adding a global endpoint" . Data Format - see the section called "Adding a global data format" . Bean - see the section called "Adding a global bean" . Adding a global endpoint In the Create a new global element dialog, select Endpoint and click OK to open the Select component dialog. Note By default, the Select component dialog opens with the Show only palette components option enabled. To see all available components, uncheck this option. Note The Grouped by categories option groups components by type. In the Select component dialog, scroll through the list of Camel components to find and select the component you want to add to the context file, and then enter an ID for it in the Id field. In this example, the JMS component is selected and myJMS is the Id value. Click Finish . You can now set properties in the Properties view as needed. The tooling autofills Id with the value you entered in the component's Id field in the Select component dialog. In this example, Camel builds the uri (required field) starting with the component's scheme (in this case, jms: ), but you must specify the destinationName and the destinationType to complete the component's uri . Note For the JMS component, the destination type defaults to queue . This default value does not appear in the uri field on the Details page until you have entered a value in Destination Name (required field). To complete the component's uri, click Advanced .
In the Destination Name field, enter the name of the destination endpoint (for example, FOO.BAR ). In the Destination Type field, enter the endpoint destination's type (for example, queue , topic , temp:queue , or temp:topic ). The Properties view's Details and Advanced tabs provide access to all properties available for configuring a particular component. Click the Consumer (advanced) tab. Enable the properties Eager Loading Of Properties and Expose Listener Session . In the route editor, switch to the Source tab to see the code that the tooling added to the context file (in this example, a configured JMS endpoint), before the first route element. When done, save your changes by selecting File Save on the menu bar. Adding a global data format In the Create a new global element dialog, select Data Format and click OK to open the Create a global Data Format dialog. The data format defaults to avro , the format at the top of the list of those available. Open the Data Format drop-down menu, and select the format you want, for example, xmljson . In the Id field, enter a name for the format, for example, myDataFormat ). Click Finish . In the Properties view, set property values as appropriate for your project, for example: In the route editor, click the Source tab to see the code that the tooling added to the context file. In this example, a configured xmljson data format is before the first route element. When done, save your changes by selecting File Save on the menu bar. Adding a global bean A global bean enables out-of-route bean definitions that can be referenced from anywhere in the route. When you copy a Bean component from the palette to the route, you can find defined global beans in the Properties view's Ref dropdown. Select the global bean that you want the Bean component to reference. To add a global bean element: In the Create a new global element window, select Bean and click OK to open the Bean Definition dialog. In the Id field, enter an ID for the global bean, for example, TransformBean . The ID must be unique in the configuration. Identify a bean class or a factory bean. To specify a factory bean, you must have already added another global bean with a factory class specified. You can then select that global bean to declare it as a global bean factory. One instance of the bean factory class will be in the runtime. Other global beans can call factory methods on that class to create their own instances of other classes. To fill the Class field, do one of the following: Enter the name of a class that is in the project or in a referenced project. Click ... to navigate to and select a class that is in the project or in a referenced project. Click + to define a new bean class and add it as a global bean. If the bean you are adding requires one or more arguments, in the Constructor Arguments section, for each argument: Click Add . Optionally, in the Type field, enter the type of the argument. The default is java.lang.String . In the Value field, enter the value of the argument. Click OK . Optionally specify one or more properties that are accessible to the global bean. In the Bean Properties section, do the following for each property: Click Add . In the Name field, enter the name of the property. In the Value field, enter the value of the property. Click OK . Click Finish to add the global bean to the configuration. 
The global bean ID you specified appears in the Configurations tab, for example: Switch to the Source tab to see the bean element that the tooling added to the context file. For example: Click the Configurations tab to return to the list of global elements and select a global bean to display its standard properties in the Properties view, for example: Note To view or edit a property that you specified when you added a global bean, select the bean in the Configurations tab and then click Edit . Set global bean properties as needed: Depends-on is a string that you can use to identify a bean that must be created before this global bean. Specify the ID (name) of the depended on bean. For example, if you are adding the TransformBean and you set Depends-on to ChangeCaseBean then ChangeCaseBean must be created and then TransformBean can be created. When the beans are being destroyed then TransformBean is destroyed first. Factory-method is useful only when the global bean is a factory class. In this situation, specify or select a static factory method to be called when the bean is referenced. Scope is singleton or prototype . The default, singleton , indicates that Camel uses the same instance of the bean each time the bean is called. Specify prototype when you want Camel to create a new instance of the bean each time the bean is called. Init-method lets you specify or select which of the bean's init() methods to call when the bean is referenced. Destroy-method lets you specify or select which of the bean's destroy methods to call when the processing performed by the bean is done. When done, save your changes by selecting File Save on the menu bar. Deleting a global element The procedure is the same whether removing an endpoint, data format or bean that was previously added to the routing context. Note You cannot perform an undo operation for deletion of a global element. If you inadvertently delete a global element that you want to keep in the configuration you might be able to undo the deletion by closing the context file without saving it. If this is not feasible then re-add the inadvertently deleted global element. In the Configurations tab, select the global element that you want to delete. For example, suppose you want to delete the data format myDataFormat that was added in the section called "Adding a global data format" : Click Delete . The global element myDataFormat disappears from the Configurations tab. Switch to the Source tab to check that the tooling removed the XML code from the routing context. When done, save your changes by selecting File Save on the menu bar. Editing a global element The procedure is the same whether modifying the properties of an endpoint, data format or bean that you added to the routing context. Typically, you do not want to change the ID of a global element. If the global element is already in use in a running route, changing the ID can break references to the global element. In the Configurations tab, select the global element that you want to edit. For example, to edit the endpoint myJMS that was added in the section called "Adding a global endpoint" , select it: Click Edit . In the Properties view, modify the element's properties as needed. For example, open the Advanced Consumer tab, and change the value of Concurrent Consumers to 2 : In the route editor, click the Source tab and check that the tooling added the property concurrentConsumers=2 to the routing context: When done, save your changes by selecting File Save on the menu bar. 2.7.
Configuring the route editor Overview Using Fuse preference settings, you can specify options for the route editor's behavior and user interface: The default language to use for expressions in Enterprise Integration Patterns (EIPs) The direction (to the right or down) in which patterns flow on the Design canvas when you create routes Whether the Design canvas displays a grid overlay in the background of the canvas. The method for labeling nodes on the Design canvas Procedure To configure the route editor: Open the Editor preferences window: On Linux and Windows machines, select Windows Preferences Fuse Tooling Editor . To select the default language that you want to use for expressions in Enterprise Integration Pattern (EIP) components, select a language from the drop-down list. The default is Simple . To specify the direction in which you want the route editor to align the patterns in a route, select Down or Right . The default is Down . To enable or disable displaying a grid overlay on the background of the canvas, check the box to Show diagram grid in Routes Editor . The default is enabled . To enable or disable using component IDs as labels in the route editor's Design tab, check the box to Use ID values for component labels . The default is disabled . If you check this option and also specify a preferred label for a component (see Step 6), then the preferred label is used for that component instead of the ID value. To use a parameter as the label for a component (except for endpoints, such as File nodes) in the route editor's Design tab: In the Preferred labels section, click Add . The New Preferred Label dialog opens. Select a Component and then select the Parameter to use as the label for the component. Click OK . The component and parameter pairs are listed in the Editor Preferences window. You can optionally Edit and Remove component labels. Note If you check the Use ID values for component labels option, it applies to all components except for the components listed in the Preferred labels section. Click Apply and Close to apply the changes to the Editor preferences and close the Preferences window. Note You can restore the route editor's original defaults at any time by returning to the Editor preferences dialog and clicking Restore Defaults .
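To tie the examples in this chapter together, the following is a hand-written sketch, not actual tooling output, of roughly how a Spring-based routing context file might look after adding a global JMS endpoint, a global xmljson data format, a global bean, and a simple route. The IDs mirror the examples above ( myJMS , myDataFormat , TransformBean , Route2 ), while the bean class, queue, and route steps are illustrative assumptions:

<beans xmlns="http://www.springframework.org/schema/beans">
  <!-- Global bean; the class name org.example.TransformBean is hypothetical -->
  <bean id="TransformBean" class="org.example.TransformBean"/>
  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <!-- Global endpoint and data format added from the Configurations tab -->
    <endpoint id="myJMS" uri="jms:queue:FOO.BAR?concurrentConsumers=2"/>
    <dataFormats>
      <xmljson id="myDataFormat"/>
    </dataFormats>
    <!-- A simple route that consumes from the global endpoint -->
    <route id="Route2">
      <from uri="ref:myJMS"/>
      <bean ref="TransformBean"/>
      <to uri="file:target/messages"/>
    </route>
  </camelContext>
</beans>

In the route editor, an equivalent result is produced through the canvas and the Configurations tab rather than by editing this XML by hand.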
|
[
"The following sections describe how to edit a routing context."
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/ridereditroute
|
Chapter 2. Installing Satellite Server
|
Chapter 2. Installing Satellite Server When the intended host for Satellite Server is in a disconnected environment, you can install Satellite Server by using an external computer to download an ISO image of the packages, and copying the packages to the system you want to install Satellite Server on. This method is not recommended for any other situation as ISO images might not contain the latest updates, bug fixes, and functionality. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. Before you continue, consider which manifests are relevant for your environment. For more information on manifests, see Managing Red Hat Subscriptions in Managing content . Note You cannot register Satellite Server to itself. 2.1. Downloading the binary DVD images Use this procedure to download the ISO images for Red Hat Enterprise Linux and Red Hat Satellite. Procedure Go to Red Hat Customer Portal and log in. Click DOWNLOADS . Select Red Hat Enterprise Linux . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Enterprise Linux for x86_64 . Version is set to the latest minor version of the product you plan to use as the base operating system. Architecture is set to the 64 bit version. On the Product Software tab, download the Binary DVD image for the latest Red Hat Enterprise Linux for x86_64 version. Click DOWNLOADS and select Red Hat Satellite . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Satellite . Version is set to the latest minor version of the product you plan to use. On the Product Software tab, download the Binary DVD image for the latest Red Hat Satellite version. Copy the ISO files to /var/tmp on the Satellite base operating system or other accessible storage device. 2.2. Configuring the base operating system with offline repositories in RHEL 8 Use this procedure to configure offline repositories for the Red Hat Enterprise Linux 8 and Red Hat Satellite ISO images. Procedure Create a directory to serve as the mount point for the ISO file corresponding to the version of the base operating system. Mount the ISO image for Red Hat Enterprise Linux to the mount point. To copy the ISO file's repository data file and change permissions, enter: Edit the repository data file and add the baseurl directive. Verify that the repository has been configured. Create a directory to serve as the mount point for the ISO file of Satellite Server. Mount the ISO image for Satellite Server to the mount point. 2.3. Optional: Using fapolicyd on Satellite Server By enabling fapolicyd on your Satellite Server, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn fapolicyd on or off on your Satellite Server or Capsule Server at any point. 2.3.1. Installing fapolicyd on Satellite Server You can install fapolicyd along with Satellite Server, or install it on an existing Satellite Server. If you are installing fapolicyd along with a new Satellite Server, the installation process will detect fapolicyd on your Red Hat Enterprise Linux host and deploy the Satellite Server rules automatically. Prerequisites Ensure your host has access to the BaseOS repositories of Red Hat Enterprise Linux.
Procedure For a new installation, install fapolicyd: For an existing installation, install fapolicyd using satellite-maintain packages install: Start the fapolicyd service: Verification Verify that the fapolicyd service is running correctly: New Satellite Server or Capsule Server installations In case of new Satellite Server or Capsule Server installation, follow the standard installation procedures after installing and enabling fapolicyd on your Red Hat Enterprise Linux host. Additional resources For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 Security hardening . 2.4. Installing the Satellite packages from the offline repositories Use this procedure to install the Satellite packages from the offline repositories. Procedure Ensure the ISO images for Red Hat Enterprise Linux Server and Red Hat Satellite are mounted: Import the Red Hat GPG keys: Ensure the base operating system is up to date with the Binary DVD image: Change to the directory where the Satellite ISO is mounted: Run the installation script in the mounted directory: Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.5. Resolving package dependency errors If there are package dependency errors during installation of Satellite Server packages, you can resolve the errors by downloading and installing packages from Red Hat Customer Portal. For more information about resolving dependency errors, see the KCS solution How can I use the yum output to solve yum dependency errors? . If you have successfully installed the Satellite packages, skip this procedure. Procedure Go to the Red Hat Customer Portal and log in. Click DOWNLOADS . Click the Product that contains the package that you want to download. Ensure that you have the correct Product Variant , Version , and Architecture for your environment. Click the Packages tab. In the Search field, enter the name of the package. Click the package. From the Version list, select the version of the package. At the bottom of the page, click Download Now . Copy the package to the Satellite base operating system. On Satellite Server, change to the directory where the package is located: Install the package locally: Change to the directory where the Satellite ISO is mounted: Verify that you have resolved the package dependency errors by installing Satellite Server packages. If there are further package dependency errors, repeat this procedure. Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. 
The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.6. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. Choose from one of the following methods: Section 2.6.1, "Configuring Satellite installation" . This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. 2.6.1. Configuring Satellite installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the most commonly used options and any default values. Use the satellite-installer --scenario satellite --full-help command to display advanced options. Specify a meaningful value for the option: --foreman-initial-organization . This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards. If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization name but not the label. By default, all configuration files configured by the installer are managed. When satellite-installer runs, it overwrites any manual changes to the managed files with the intended values. This means that running the installer on a broken system should restore it to working order, regardless of changes made. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . Unmount the ISO images: 2.7. Disabling subscription connection Disable subscription connection on disconnected Satellite Server to avoid connecting to the Red Hat Portal. This will also prevent you from refreshing the manifest and updating upstream entitlements. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Set the Subscription Connection Enabled value to No . 
CLI procedure Enter the following command on Satellite Server: 2.8. Importing a Red Hat subscription manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Note Simple Content Access (SCA) is set on the organization, not the manifest. Importing a manifest does not change your organization's Simple Content Access status. Prerequisites Ensure you have a Red Hat subscription manifest exported from the Red Hat Customer Portal. For more information, see Using manifests for a disconnected Satellite Server in Subscription Central . Ensure that you disable subscription connection on your Satellite Server. For more information, see Section 2.7, "Disabling subscription connection" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Choose File . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . CLI procedure Copy the Red Hat subscription manifest file from your local machine to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in Managing content .
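As an optional verification step that is not part of the procedure above, you can list the subscriptions provided by the imported manifest with hammer; the organization name below is the same placeholder used elsewhere in this chapter:

# Optional check: list subscriptions in the organization after the manifest import.
hammer subscription list --organization "My_Organization"

If the manifest imported correctly, the subscriptions it contains appear in the output.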
|
[
"scp localfile username@hostname:remotefile",
"mkdir /media/rhel8",
"mount -o loop rhel8-DVD .iso /media/rhel8",
"cp /media/rhel8/media.repo /etc/yum.repos.d/rhel8.repo chmod u+w /etc/yum.repos.d/rhel8.repo",
"[RHEL8-BaseOS] name=Red Hat Enterprise Linux BaseOS mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/BaseOS/ [RHEL8-AppStream] name=Red Hat Enterprise Linux Appstream mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/AppStream/",
"yum repolist",
"mkdir /media/sat6",
"mount -o loop sat6-DVD .iso /media/sat6",
"dnf install fapolicyd",
"satellite-maintain packages install fapolicyd",
"systemctl enable --now fapolicyd",
"systemctl status fapolicyd",
"findmnt -t iso9660",
"rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
"dnf upgrade",
"cd /media/sat6/",
"./install_packages",
"cd /path-to-package/",
"dnf install package_name",
"cd /media/sat6/",
"./install_packages",
"satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password",
"umount /media/sat6 umount /media/rhel8",
"hammer settings set --name subscription_connection_enabled --value false",
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \""
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/Installing_Server_Disconnected_satellite
|
Chapter 14. Subscription consumption
|
Chapter 14. Subscription consumption The Ansible Automation Platform metrics utility tool ( metrics-utility ) is a command-line utility that is installed on a system containing an instance of automation controller. When installed and configured, metrics-utility gathers billing-related metrics from your system and creates a consumption-based billing report. Metrics-utility is especially suited for users who have multiple managed hosts and want to use consumption-based billing. Once a report is generated, it is deposited in a target location that you specify in the configuration file. Metrics-utility collects two types of data from your system: configuration data and reporting data. The configuration data includes the following information: Version information for automation controller and for the operating system Subscription information The base URL The reporting data includes the following information: Job name and ID Host name Inventory name Organization name Project name Success or failure information Report date and time To ensure that metrics-utility continues to work as configured, clear your report directories of outdated reports regularly. 14.1. Configuring metrics-utility 14.1.1. On Red Hat Enterprise Linux Prerequisites: An active Ansible Automation Platform subscription Metrics-utility is included with Ansible Automation Platform, so you do not need a separate installation. The following commands gather the relevant data and generate a CCSP report containing your usage metrics. You can configure these commands as cronjobs to ensure they run at the beginning of every month. See How to schedule jobs using the Linux 'cron' utility for more on configuring using the cron syntax. Procedure In the cron file, set the following variables to ensure metrics-utility gathers the relevant data. To open the cron file for editing, run: crontab -e Specify the following variables to indicate where the report is deposited in your file system: export METRICS_UTILITY_SHIP_TARGET=directory export METRICS_UTILITY_SHIP_PATH=/awx_devel/awx-dev/metrics-utility/shipped_data/billing Set these variables to generate a report: export METRICS_UTILITY_REPORT_TYPE=CCSP export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD export METRICS_UTILITY_REPORT_SKU=MCT3752MO export METRICS_UTILITY_REPORT_SKU_DESCRIPTION="EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)" export METRICS_UTILITY_REPORT_H1_HEADING="CCSP Reporting <Company>: ANSIBLE Consumption" export METRICS_UTILITY_REPORT_COMPANY_NAME="Company Name" export METRICS_UTILITY_REPORT_EMAIL="[email protected]" export METRICS_UTILITY_REPORT_RHN_LOGIN="test_login" export METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER="BUSINESS LEADER" export METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER="PROCUREMENT LEADER" Add the following parameter to gather and store the data in the provided SHIP_PATH directory in the ./report_data subdirectory: metrics-utility gather_automation_controller_billing_data --ship --until=10m To configure the run schedule, add the following parameters to the end of the file and specify how often you want metrics-utility to gather information and build a report using cron syntax . In the following example, the gather command is configured to run every hour at 00 minutes. The build_report command is configured to run every second day of each month at 4:00 AM. 
0 */1 * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m 0 4 2 * * metrics-utility build_report Save and close the file. To verify that you saved your changes, run: crontab -l You can also check the logs to ensure that data is being collected. Run: cat /var/log/cron The following is an example of the output. Note that time and date might vary depending on how you configure the run schedule: May 8 09:45:03 ip-10-0-6-23 CROND[51623]: (root) CMDOUT (No billing data for month: 2024-04) May 8 09:45:03 ip-10-0-6-23 CROND[51623]: (root) CMDEND (metrics-utility build_report) May 8 09:45:19 ip-10-0-6-23 crontab[51619]: (root) END EDIT (root) May 8 09:45:34 ip-10-0-6-23 crontab[51659]: (root) BEGIN EDIT (root) May 8 09:46:01 ip-10-0-6-23 CROND[51688]: (root) CMD (metrics-utility gather_automation_controller_billing_data --ship --until=10m) May 8 09:46:03 ip-10-0-6-23 CROND[51669]: (root) CMDOUT (/tmp/9e3f86ee-c92e-4b05-8217-72c496e6ffd9-2024-05-08-093402+0000-2024-05-08-093602+0000-0.tar.gz) May 8 09:46:03 ip-10-0-6-23 CROND[51669]: (root) CMDEND (metrics-utility gather_automation_controller_billing_data --ship --until=10m) May 8 09:46:26 ip-10-0-6-23 crontab[51659]: (root) END EDIT (root) Run the following command to build a report for the month: metrics-utility build_report The generated report will have the default name CCSP-<YEAR>-<MONTH>.xlsx and will be deposited in the ship path that you specified in step 2. 14.1.2. On OpenShift Container Platform from the Ansible Automation Platform operator Metrics-utility is included in the OpenShift Container Platform image beginning with version 4.12. If your system does not have metrics-utility installed, update your OpenShift image to the latest version. Follow the steps below to configure the run schedule for metrics-utility on OpenShift Container Platform using the Ansible Automation Platform operator. Prerequisites: A running OpenShift cluster An operator-based installation of Ansible Automation Platform on OpenShift Container Platform. Note Metrics-utility will run as indicated by the parameters you set in the configuration file. The utility cannot be run manually on OpenShift Container Platform. 14.1.2.1. Create a ConfigMap in the OpenShift UI YAML view From the navigation panel on the left side, select ConfigMaps , and then click the Create ConfigMap button. On the screen, select the YAML view tab. In the YAML field, enter the following parameters with the appropriate variables set: apiVersion: v1 kind: ConfigMap metadata: name: automationcontroller-metrics-utility-configmap data: METRICS_UTILITY_SHIP_TARGET: directory METRICS_UTILITY_SHIP_PATH: /metrics-utility METRICS_UTILITY_REPORT_TYPE: CCSP METRICS_UTILITY_PRICE_PER_NODE: '11' # in USD METRICS_UTILITY_REPORT_SKU: MCT3752MO METRICS_UTILITY_REPORT_SKU_DESCRIPTION: "EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)" METRICS_UTILITY_REPORT_H1_HEADING: "CCSP Reporting <Company>: ANSIBLE Consumption" METRICS_UTILITY_REPORT_COMPANY_NAME: "Company Name" METRICS_UTILITY_REPORT_EMAIL: "[email protected]" METRICS_UTILITY_REPORT_RHN_LOGIN: "test_login" METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER: "BUSINESS LEADER" METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER: "PROCUREMENT LEADER" Click Create . To verify that the ConfigMap was created and the metrics utility is installed, select ConfigMap from the navigation panel and look for your ConfigMap in the list. 14.1.2.2.
Deploy automation controller Deploy automation controller and specify variables for how often metrics-utility gathers usage information and generates a report. From the navigation panel, select Installed Operators . Select Ansible Automation Platform. In the Operator details, select the automation controller tab. Click Create automation controller . Select the YAML view option. The YAML now shows the default parameters for automation controller. The relevant parameters for metrics-utility are the following: Parameter Variable metrics_utility_enabled True. metrics_utility_cronjob_gather_schedule @hourly or @daily. metrics_utility_cronjob_report_schedule @daily or @monthly. Find the metrics_utility_enabled parameter and change the variable to true . Find the metrics_utility_cronjob_gather_schedule parameter and enter a variable for how often the utility should gather usage information (for example, @hourly or @daily). Find the metrics_utility_cronjob_report_schedule parameter and enter a variable for how often the utility generates a report (for example, @daily or @monthly). Click Create . 14.2. Fetching a monthly report 14.2.1. On RHEL To fetch a monthly report on RHEL, run: scp -r username@controller_host:$METRICS_UTILITY_SHIP_PATH/data/<YYYY>/<MM>/ /local/directory/ The generated report will have the default name CCSP-<YEAR>-<MONTH>.xlsx and will be deposited in the ship path that you specified. 14.2.2. On OpenShift Container Platform from the Ansible Automation Platform operator Use the following playbook to fetch a monthly consumption report for Ansible Automation Platform on OpenShift Container Platform: - name: Copy directory from Kubernetes PVC to local machine hosts: localhost vars: report_dir_path: "/mnt/metrics/reports/{{ year }}/{{ month }}/" tasks: - name: Create a temporary pod to access PVC data kubernetes.core.k8s: definition: apiVersion: v1 kind: Pod metadata: name: temp-pod namespace: "{{ namespace_name }}" spec: containers: - name: busybox image: busybox command: ["/bin/sh"] args: ["-c", "sleep 3600"] # Keeps the container alive for 1 hour volumeMounts: - name: "{{ pvc }}" mountPath: "/mnt/metrics" volumes: - name: "{{ pvc }}" persistentVolumeClaim: claimName: automationcontroller-metrics-utility restartPolicy: Never register: pod_creation - name: Wait for both initContainer and main container to be ready kubernetes.core.k8s_info: kind: Pod namespace: "{{ namespace_name }}" name: temp-pod register: pod_status until: > pod_status.resources[0].status.containerStatuses[0].ready retries: 30 delay: 10 - name: Create a tarball of the directory of the report in the container kubernetes.core.k8s_exec: namespace: "{{ namespace_name }}" pod: temp-pod container: busybox command: tar czf /tmp/metrics.tar.gz -C "{{ report_dir_path }}" .
register: tarball_creation - name: Copy the report tarball from the container to the local machine kubernetes.core.k8s_cp: namespace: "{{ namespace_name }}" pod: temp-pod container: busybox state: from_pod remote_path: /tmp/metrics.tar.gz local_path: "{{ local_dir }}/metrics.tar.gz" when: tarball_creation is succeeded - name: Ensure the local directory exists ansible.builtin.file: path: "{{ local_dir }}" state: directory - name: Extract the report tarball on the local machine ansible.builtin.unarchive: src: "{{ local_dir }}/metrics.tar.gz" dest: "{{ local_dir }}" remote_src: yes extra_opts: "--strip-components=1" when: tarball_creation is succeeded - name: Delete the temporary pod kubernetes.core.k8s: api_version: v1 kind: Pod namespace: "{{ namespace_name }}" name: temp-pod state: absent 14.3. Modifying the run schedule You can configure metrics-utility to run at specified times and intervals. Run frequency is expressed in cronjobs. See How to schedule jobs using the Linux 'Cron' utility for more information. 14.3.1. On RHEL Procedure From the command line, run: crontab -e After the code editor has opened, update the gather and build parameters using cron syntax as shown below: */2 * * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m */5 * * * * metrics-utility build_report Save and close the file. 14.3.2. On OpenShift Container Platform from the Ansible Automation Platform operator Procedure From the navigation panel, select Workloads Deployments . On the screen, select automation-controller-operator-controller-manager . Beneath the heading Deployment Details , click the down arrow button to change the number of pods to zero. This will pause the deployment so you can update the running schedule. From the navigation panel, select Installed Operators . From the list of installed operators, select Ansible Automation Platform. On the screen, select the automation controller tab. From the list that appears, select your automation controller instance. On the screen, select the YAML tab. In the YAML file, find the following parameters and enter a variable representing how often metrics-utility should gather data and how often it should produce a report: metrics_utility_cronjob_gather_schedule: metrics_utility_cronjob_report_schedule: Click Save . From the navigation menu, select Deployments and then select automation-controller-operator-controller-manager . Increase the number of pods to 1. To verify that you have changed the metrics-utility running schedule successfully, you can take one or both of the following steps: return to the YAML file and ensure that the parameters described above reflect the correct variables. From the navigation menu, select Workloads Cronjobs and ensure that your cronjobs show the updated schedule.
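For reference, the following is a minimal sketch of how an automation controller custom resource might look with the metrics-utility parameters from this chapter set. The resource name and namespace are placeholders, and the assumption that the metrics_utility_* fields sit directly under spec (and the apiVersion shown) should be verified against your installed operator version:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example-automation-controller      # placeholder name
  namespace: ansible-automation-platform   # placeholder namespace
spec:
  # Parameters described in this chapter; values use the cron-style keywords shown above
  metrics_utility_enabled: true
  metrics_utility_cronjob_gather_schedule: "@hourly"
  metrics_utility_cronjob_report_schedule: "@monthly"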
|
[
"crontab -e",
"export METRICS_UTILITY_SHIP_TARGET=directory export METRICS_UTILITY_SHIP_PATH=/awx_devel/awx-dev/metrics-utility/shipped_data/billing",
"export METRICS_UTILITY_REPORT_TYPE=CCSP export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD export METRICS_UTILITY_REPORT_SKU=MCT3752MO export METRICS_UTILITY_REPORT_SKU_DESCRIPTION=\"EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)\" export METRICS_UTILITY_REPORT_H1_HEADING=\"CCSP Reporting <Company>: ANSIBLE Consumption\" export METRICS_UTILITY_REPORT_COMPANY_NAME=\"Company Name\" export METRICS_UTILITY_REPORT_EMAIL=\"[email protected]\" export METRICS_UTILITY_REPORT_RHN_LOGIN=\"test_login\" export METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER=\"BUSINESS LEADER\" export METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER=\"PROCUREMENT LEADER\"",
"metrics-utility gather_automation_controller_billing_data --ship --until=10m",
"0 */1 * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m 0 4 2 * * metrics-utility build_report",
"crontab -l",
"cat /var/log/cron",
"May 8 09:45:03 ip-10-0-6-23 CROND[51623]: (root) CMDOUT (No billing data for month: 2024-04) May 8 09:45:03 ip-10-0-6-23 CROND[51623]: (root) CMDEND (metrics-utility build_report) May 8 09:45:19 ip-10-0-6-23 crontab[51619]: (root) END EDIT (root) May 8 09:45:34 ip-10-0-6-23 crontab[51659]: (root) BEGIN EDIT (root) May 8 09:46:01 ip-10-0-6-23 CROND[51688]: (root) CMD (metrics-utility gather_automation_controller_billing_data --ship --until=10m) May 8 09:46:03 ip-10-0-6-23 CROND[51669]: (root) CMDOUT (/tmp/9e3f86ee-c92e-4b05-8217-72c496e6ffd9-2024-05-08-093402+0000-2024-05-08-093602+0000-0.tar.gz) May 8 09:46:03 ip-10-0-6-23 CROND[51669]: (root) CMDEND (metrics-utility gather_automation_controller_billing_data --ship --until=10m) May 8 09:46:26 ip-10-0-6-23 crontab[51659]: (root) END EDIT (root)",
"metrics-utility build_report",
"apiVersion: v1 kind: ConfigMap metadata: name: automationcontroller-metrics-utility-configmap data: METRICS_UTILITY_SHIP_TARGET: directory METRICS_UTILITY_SHIP_PATH: /metrics-utility METRICS_UTILITY_REPORT_TYPE: CCSP METRICS_UTILITY_PRICE_PER_NODE: '11' # in USD METRICS_UTILITY_REPORT_SKU: MCT3752MO METRICS_UTILITY_REPORT_SKU_DESCRIPTION: \"EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)\" METRICS_UTILITY_REPORT_H1_HEADING: \"CCSP Reporting <Company>: ANSIBLE Consumption\" METRICS_UTILITY_REPORT_COMPANY_NAME: \"Company Name\" METRICS_UTILITY_REPORT_EMAIL: \"[email protected]\" METRICS_UTILITY_REPORT_RHN_LOGIN: \"test_login\" METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER: \"BUSINESS LEADER\" METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER: \"PROCUREMENT LEADER\"",
"[cols=\"50%,50%\",options=\"header\"] |==== | *Parameter* | *Variable* | *`metrics_utility_enabled`* | True. | *`metrics_utility_cronjob_gather_schedule`* | @hourly or @daily. | *`metrics_utility_cronjob_report_schedule`* | @daily or @monthly. |====",
"scp -r username@controller_host:USDMETRICS_UTILITY_SHIP_PATH/data/<YYYY>/<MM>/ /local/directory/",
"- name: Copy directory from Kubernetes PVC to local machine hosts: localhost vars: report_dir_path: \"/mnt/metrics/reports/{{ year }}/{{ month }}/\" tasks: - name: Create a temporary pod to access PVC data kubernetes.core.k8s: definition: apiVersion: v1 kind: Pod metadata: name: temp-pod namespace: \"{{ namespace_name }}\" spec: containers: - name: busybox image: busybox command: [\"/bin/sh\"] args: [\"-c\", \"sleep 3600\"] # Keeps the container alive for 1 hour volumeMounts: - name: \"{{ pvc }}\" mountPath: \"/mnt/metrics\" volumes: - name: \"{{ pvc }}\" persistentVolumeClaim: claimName: automationcontroller-metrics-utility restartPolicy: Never register: pod_creation - name: Wait for both initContainer and main container to be ready kubernetes.core.k8s_info: kind: Pod namespace: \"{{ namespace_name }}\" name: temp-pod register: pod_status until: > pod_status.resources[0].status.containerStatuses[0].ready retries: 30 delay: 10 - name: Create a tarball of the directory of the report in the container kubernetes.core.k8s_exec: namespace: \"{{ namespace_name }}\" pod: temp-pod container: busybox command: tar czf /tmp/metrics.tar.gz -C \"{{ report_dir_path }}\" . register: tarball_creation - name: Copy the report tarball from the container to the local machine kubernetes.core.k8s_cp: namespace: \"{{ namespace_name }}\" pod: temp-pod container: busybox state: from_pod remote_path: /tmp/metrics.tar.gz local_path: \"{{ local_dir }}/metrics.tar.gz\" when: tarball_creation is succeeded - name: Ensure the local directory exists ansible.builtin.file: path: \"{{ local_dir }}\" state: directory - name: Extract the report tarball on the local machine ansible.builtin.unarchive: src: \"{{ local_dir }}/metrics.tar.gz\" dest: \"{{ local_dir }}\" remote_src: yes extra_opts: \"--strip-components=1\" when: tarball_creation is succeeded - name: Delete the temporary pod kubernetes.core.k8s: api_version: v1 kind: Pod namespace: \"{{ namespace_name }}\" name: temp-pod state: absent",
"crontab -e",
"*/2 * * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m */5 * * * * metrics-utili ty build_report",
"metrics_utility_cronjob_gather_schedule: metrics_utility_cronjob_report_schedule:"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/metrics-utility
|
Chapter 23. Service [v1]
|
Chapter 23. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object 23.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ServiceSpec describes the attributes that a user creates on a service. status object ServiceStatus represents the current status of a service. 23.1.1. .spec Description ServiceSpec describes the attributes that a user creates on a service. Type object Property Type Description allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. clusterIP string clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies clusterIPs array (string) ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. If this field is not specified, it will be initialized from the clusterIP field. If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value. This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies externalIPs array (string) externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. externalName string externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) and requires type to be "ExternalName". externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get "Cluster" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node. Possible enum values: - "Cluster" - "Cluster" routes traffic to all endpoints. - "Local" - "Local" preserves the source IP of the traffic by routing only to endpoints on the same node as the traffic was received on (dropping the traffic if there are no local endpoints). healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" routes traffic only to endpoints on the same node as the client pod (dropping the traffic if there are no local endpoints). ipFamilies array (string) IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. Possible enum values: - "PreferDualStack" indicates that this service prefers dual-stack when the cluster is configured for dual-stack. If the cluster is not configured for dual-stack the service will be assigned a single IPFamily. If the IPFamily is not set in service.spec.ipFamilies then the service will be assigned the default IPFamily configured on the cluster - "RequireDualStack" indicates that this service requires dual-stack. Using IPFamilyPolicyRequireDualStack on a single stack cluster will result in validation errors. The IPFamilies (and their order) assigned to this service is based on service.spec.ipFamilies. If service.spec.ipFamilies was not provided then it will be assigned according to how they are configured on the cluster. If service.spec.ipFamilies has only one entry then the alternative IPFamily will be added by apiserver - "SingleStack" indicates that this service is required to have a single IPFamily. 
The IPFamily assigned is based on the default IPFamily used by the cluster or as identified by service.spec.ipFamilies field loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it cannot be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type. loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations. Using it is non-portable and it may not support dual-stack. Users are encouraged to use implementation-specific annotations when available. loadBalancerSourceRanges array (string) If specified and supported by the platform, traffic through the cloud-provider load-balancer is restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ ports array The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ports[] object ServicePort contains information on service's port. publishNotReadyAddresses boolean publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior. selector object (string) Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/ sessionAffinity string Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None.
More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Possible enum values: - "ClientIP" is the Client IP based. - "None" - no session affinity. sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity. type string type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Possible enum values: - "ClusterIP" means a service will only be accessible inside the cluster, via the cluster IP. - "ExternalName" means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved. - "LoadBalancer" means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type. - "NodePort" means a service will be exposed on one port of every node, in addition to 'ClusterIP' type. 23.1.2. .spec.ports Description The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Type array 23.1.3. .spec.ports[] Description ServicePort contains information on service's port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either: * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). * Kubernetes-defined prefixed names: * 'kubernetes.io/h2c' - HTTP/2 prior knowledge over cleartext as described in https://www.rfc-editor.org/rfc/rfc9113.html#name-starting-http-2-with-prior- * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 * Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service. nodePort integer The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. 
If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport port integer The port that will be exposed by this service. protocol string The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. targetPort IntOrString Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service 23.1.4. .spec.sessionAffinityConfig Description SessionAffinityConfig represents the configurations of session affinity. Type object Property Type Description clientIP object ClientIPConfig represents the configurations of Client IP based session affinity. 23.1.5. .spec.sessionAffinityConfig.clientIP Description ClientIPConfig represents the configurations of Client IP based session affinity. Type object Property Type Description timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && ⇐86400(for 1 day) if ServiceAffinity == "ClientIP". Default value is 10800(for 3 hours). 23.1.6. .status Description ServiceStatus represents the current status of a service. Type object Property Type Description conditions array (Condition) Current service state loadBalancer object LoadBalancerStatus represents the status of a load-balancer. 23.1.7. .status.loadBalancer Description LoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. ingress[] object LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. 23.1.8. .status.loadBalancer.ingress Description Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. Type array 23.1.9. .status.loadBalancer.ingress[] Description LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. Type object Property Type Description hostname string Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers) ip string IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers) ipMode string IPMode specifies how the load-balancer IP behaves, and may only be specified when the ip field is specified. 
Setting this to "VIP" indicates that traffic is delivered to the node with the destination set to the load-balancer's IP and port. Setting this to "Proxy" indicates that traffic is delivered to the node or pod with the destination set to the node's IP and node port or the pod's IP and port. Service implementations may use this information to adjust traffic routing. ports array Ports is a list of records of service ports If used, every port defined in the service should have an entry in it ports[] object 23.1.10. .status.loadBalancer.ingress[].ports Description Ports is a list of records of service ports If used, every port defined in the service should have an entry in it Type array 23.1.11. .status.loadBalancer.ingress[].ports[] Description Type object Required port protocol Property Type Description error string Error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer Port is the port number of the service port of which status is recorded here protocol string Protocol is the protocol of the service port of which status is recorded here The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 23.2. API endpoints The following API endpoints are available: /api/v1/services GET : list or watch objects of kind Service /api/v1/watch/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services DELETE : delete collection of Service GET : list or watch objects of kind Service POST : create a Service /api/v1/watch/namespaces/{namespace}/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services/{name} DELETE : delete a Service GET : read the specified Service PATCH : partially update the specified Service PUT : replace the specified Service /api/v1/watch/namespaces/{namespace}/services/{name} GET : watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/services/{name}/status GET : read status of the specified Service PATCH : partially update status of the specified Service PUT : replace status of the specified Service 23.2.1. /api/v1/services HTTP method GET Description list or watch objects of kind Service Table 23.1. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty 23.2.2. /api/v1/watch/services HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 23.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 23.2.3. /api/v1/namespaces/{namespace}/services HTTP method DELETE Description delete collection of Service Table 23.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 23.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Service Table 23.5. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty HTTP method POST Description create a Service Table 23.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.7. Body parameters Parameter Type Description body Service schema Table 23.8. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 202 - Accepted Service schema 401 - Unauthorized Empty 23.2.4. /api/v1/watch/namespaces/{namespace}/services HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 23.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 23.2.5. /api/v1/namespaces/{namespace}/services/{name} Table 23.10. Global path parameters Parameter Type Description name string name of the Service HTTP method DELETE Description delete a Service Table 23.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 23.12. HTTP responses HTTP code Reponse body 200 - OK Service schema 202 - Accepted Service schema 401 - Unauthorized Empty HTTP method GET Description read the specified Service Table 23.13. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Service Table 23.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.15. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Service Table 23.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.17. Body parameters Parameter Type Description body Service schema Table 23.18. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty 23.2.6. /api/v1/watch/namespaces/{namespace}/services/{name} Table 23.19. Global path parameters Parameter Type Description name string name of the Service HTTP method GET Description watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 23.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 23.2.7. /api/v1/namespaces/{namespace}/services/{name}/status Table 23.21. Global path parameters Parameter Type Description name string name of the Service HTTP method GET Description read status of the specified Service Table 23.22. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Service Table 23.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.24. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Service Table 23.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.26. Body parameters Parameter Type Description body Service schema Table 23.27. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty
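For reference, the following is a minimal Service manifest that exercises several of the spec fields documented above. The object name, namespace, label selector, and port numbers are illustrative placeholders rather than values required by the API, so adjust them to match your workload:

apiVersion: v1
kind: Service
metadata:
  name: example-service        # placeholder name
  namespace: default           # placeholder namespace
spec:
  type: ClusterIP              # one of ClusterIP, NodePort, LoadBalancer, or ExternalName
  selector:
    app: example               # traffic is routed to pods that carry this label
  ports:
    - name: http
      protocol: TCP
      port: 80                 # port exposed by the Service
      targetPort: 8080         # port on the selected pods
  sessionAffinity: None

Applying the manifest, for example with oc apply -f <file> , creates the Service; the status block is then populated by the control plane, such as the loadBalancer ingress entries for LoadBalancer-type Services.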
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/service-v1
|
Chapter 4. Composing a RHEL for Edge image using image builder in RHEL web console
|
Chapter 4. Composing a RHEL for Edge image using image builder in RHEL web console Use RHEL image builder to create a custom RHEL for Edge image (OSTree commit). To access RHEL image builder and to create your custom RHEL for Edge image, you can either use the RHEL web console interface or the command line. You can compose RHEL for Edge images by using RHEL image builder in RHEL web console by performing the following high-level steps: Access RHEL image builder in RHEL web console Create a blueprint for RHEL for Edge image. Create a RHEL for Edge image. You can create the following images: RHEL for Edge Commit image. RHEL for Edge Container image. RHEL for Edge Installer image. Download the RHEL for Edge image 4.1. Accessing RHEL image builder in the RHEL web console To access RHEL image builder in RHEL web console, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites You have installed a RHEL system. You have administrative rights on the system. You have subscribed the RHEL system to Red Hat Subscription Manager (RHSM) or to Red Hat Satellite Server. Your system is powered on and accessible over the network. You have installed RHEL image builder on the system. Procedure On your RHEL system, access https://localhost:9090/ in a web browser. For more information about how to remotely access RHEL image builder, see Managing systems using the RHEL 8 web console document. Log in to the web console using an administrative user account. On the web console, in the left hand menu, click Apps . Click Image Builder . The RHEL image builder dashboard opens in the right pane. You can now proceed to create a blueprint for the RHEL for Edge images. 4.2. Creating a blueprint for a RHEL for Edge image using image builder in the web console To create a blueprint for a RHEL for Edge image by using RHEL image builder in RHEL web console, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites On a RHEL system, you have opened the RHEL image builder dashboard. Procedure On the RHEL image builder dashboard, click Create Blueprint . The Create Blueprint dialogue box opens. On the Details page: Enter the name of the blueprint and, optionally, its description. Click . Optional: In the Packages page: On the Available packages search, enter the package name and click the > button to move it to the Chosen packages field. Search and include as many packages as you want. Click . Note These customizations are all optional unless otherwise specified. On the Kernel page, enter a kernel name and the command-line arguments. On the File system page, select Use automatic partitioning . OSTree systems do not support filesystem customization, because OSTree images have their own mount rule, such as read-only. Click . On the Services page, you can enable or disable services: Enter the service names you want to enable or disable, separating them by a comma, by space, or by pressing the Enter key. Click . On the Firewall page, set up your firewall setting: Enter the Ports , and the firewall services you want to enable or disable. Click the Add zone button to manage your firewall rules for each zone independently. Click . On the Users page, add a users by following the steps: Click Add user . Enter a Username , a password , and a SSH key . You can also mark the user as a privileged user, by clicking the Server administrator checkbox. Click . 
On the Groups page, add groups by completing the following steps: Click the Add groups button. Enter a Group name and a Group ID . You can add more groups. Click . On the SSH keys page, add a key: Click the Add key button. Enter the SSH key. Enter a User . Click . On the Timezone page, set your timezone settings: On the Timezone field, enter the timezone you want to add to your system image. For example, add the following timezone format: "US/Eastern". If you do not set a timezone, the system uses Universal Time, Coordinated (UTC) as default. Enter the NTP servers. Click . On the Locale page, complete the following steps: On the Keyboard search field, enter the keyboard layout you want to add to your system image. For example: "us". On the Languages search field, enter the language you want to add to your system image. For example: ["en_US.UTF-8"]. Click . On the Others page, complete the following steps: On the Hostname field, enter the hostname you want to add to your system image. If you do not add a hostname, the operating system determines the hostname. Mandatory only for the Simplified Installer image: On the Installation Devices field, enter a valid node for your system image. For example: /dev/sda . Click . Mandatory only when building FIDO images: On the FIDO device onboarding page, complete the following steps: On the Manufacturing server URL field, enter the manufacturing server URL. On the DIUN public key insecure field, enter the insecure public key. On the DIUN public key hash field, enter the public key hash. On the DIUN public key root certs field, enter the public key root certs. Click . On the OpenSCAP page, complete the following steps: On the Datastream field, enter the datastream remediation instructions you want to add to your system image. On the Profile ID field, enter the profile_id security profile you want to add to your system image. Click . Mandatory only when building Ignition images: On the Ignition page, complete the following steps: On the Firstboot URL field, enter the URL to the Ignition configuration you want to add to your system image. On the Embedded Data field, drag or upload your file. Click . On the Review page, review the details about the blueprint. Click Create . The RHEL image builder view opens, listing existing blueprints. 4.3. Creating a RHEL for Edge image Create a RHEL for Edge image. Choose one of the following image types, according to your needs. 4.3.1. Creating a RHEL for Edge Commit image by using image builder in web console You can create a "RHEL for Edge Commit" image by using RHEL image builder in RHEL web console. The "RHEL for Edge Commit (.tar)" image type contains a full operating system, but it is not directly bootable. To boot the Commit image type, you must deploy it in a running container. Prerequisites On a RHEL system, you have accessed the RHEL image builder dashboard. Procedure On the RHEL image builder dashboard click Create Image . On the Image output page, perform the following steps: From the Select a blueprint dropdown menu, select the blueprint you want to use. From the Image output type dropdown list, select "RHEL for Edge Commit (.tar)" . Click . On the OSTree settings page, enter: Repository URL : specify the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. Parent commit : specify a commit, or leave it empty if you do not have a commit at this time. In the Ref text box, specify a reference path for where your commit is going to be created.
By default, the web console specifies rhel/8/$ARCH/edge . The "$ARCH" value is determined by the host machine. Click . On the Review page, check the customizations and click Create . RHEL image builder starts to create a RHEL for Edge Commit image for the blueprint that you created. Note The image creation process takes up to 20 minutes to complete. Verification To check the RHEL for Edge Commit image creation progress: Click the Images tab. After the image creation process is complete, you can download the resulting "RHEL for Edge Commit (.tar)" image. Additional resources Downloading a RHEL for Edge image 4.3.2. Creating a RHEL for Edge Container image by using RHEL image builder in RHEL web console You can create RHEL for Edge images by selecting "RHEL for Edge Container (.tar)" . The RHEL for Edge Container (.tar) image type creates an OSTree commit and embeds it into an OCI container with a web server. When the container is started, the web server serves the commit as an OSTree repository. Follow the steps in this procedure to create a RHEL for Edge Container image using image builder in RHEL web console. Prerequisites On a RHEL system, you have accessed the RHEL image builder dashboard. You have created a blueprint. Procedure On the RHEL image builder dashboard click Create Image . On the Image output page, perform the following steps: From the Select a blueprint dropdown menu, select the blueprint you want to use. From the Image output type dropdown list, select "RHEL for Edge Container (.tar)" . Click . On the OSTree page, enter: Repository URL : specify the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. By default, the repository folder for a RHEL for Edge Container image is "/repo". To find the correct URL to use, access the running container and check the nginx.conf file. Inside the nginx.conf file, find the root directory entry to search for the /repo/ folder information. Note that if you do not specify a repository URL when creating a RHEL for Edge Container image (.tar) by using RHEL image builder, the default /repo/ entry is created in the nginx.conf file. Parent commit : specify a commit, or leave it empty if you do not have a commit at this time. In the Ref text box, specify a reference path for where your commit is going to be created. By default, the web console specifies rhel/8/$ARCH/edge . The "$ARCH" value is determined by the host machine. Click . On the Review page, check the customizations. Click Save blueprint . Click Create . RHEL image builder starts to create a RHEL for Edge Container image for the blueprint that you created. Note The image creation process takes up to 20 minutes to complete. Verification To check the RHEL for Edge Container image creation progress: Click the Images tab. After the image creation process is complete, you can download the resulting "RHEL for Edge Container (.tar)" image. Additional resources Downloading a RHEL for Edge image 4.3.3. Creating a RHEL for Edge Installer image by using image builder in RHEL web console You can create RHEL for Edge Installer images by selecting RHEL for Edge Installer (.iso) . The RHEL for Edge Installer (.iso) image type pulls the OSTree commit repository from the running container served by the RHEL for Edge Container (.tar) and creates an installable boot ISO image with a Kickstart file that is configured to use the embedded OSTree commit.
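For context, the "running container" that this image type pulls from is typically the RHEL for Edge Container (.tar) image that you downloaded earlier and started with a container tool such as Podman. The commands below are only a sketch: the archive name and image tag are placeholders, and the tag that podman load reports on your system may differ.

$ podman load -i rhel-edge-container.tar
$ podman run -d --rm --name edge-commit-server -p 8080:8080 localhost/rhel-edge-container

Here, rhel-edge-container.tar and localhost/rhel-edge-container are placeholders for the downloaded archive and for the image tag that podman load reports; the running container serves the commit at http://<host>:8080/repo/ , which is the kind of URL that the following procedure expects in the Repository URL field.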
Follow the steps in this procedure to create a RHEL for Edge image using image builder in RHEL web console. Prerequisites On a RHEL system, you have accessed the image builder dashboard. You created a blueprint. You created a RHEL for Edge Container image and loaded it into a running container. See Creating a RHEL for Edge Container image for non-network-based deployments . Procedure On the RHEL image builder dashboard click Create Image . On the Image output page, perform the following steps: From the Select a blueprint dropdown menu, select the blueprint you want to use. From the Image output type dropdown list, select RHEL for Edge Installer ( .iso ) image. Click . On the OSTree settings page, enter: Repository URL : specify the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. In the Ref text box, specify a reference path for where your commit is going to be created. By default, the web console specifies rhel/8/$ARCH/edge . The "$ARCH" value is determined by the host machine. Click . On the Review page, check the customizations. Click Save blueprint . Click Create . RHEL image builder starts to create a RHEL for Edge Installer image for the blueprint that you created. Note The image creation process takes up to 20 minutes to complete. Verification To check the RHEL for Edge Installer image creation progress: Click the Images tab. After the image creation process is complete, you can download the resulting RHEL for Edge Installer (.iso) image and boot the ISO image on a device. Additional resources Downloading a RHEL for Edge image 4.4. Downloading a RHEL for Edge image After you successfully create the RHEL for Edge image by using RHEL image builder, download the image to the local host. Procedure To download an image: From the More Options menu, click Download . The RHEL image builder tool downloads the file to your default download location. The downloaded file consists of a .tar file with an OSTree repository for RHEL for Edge Commit and RHEL for Edge Container images, or a .iso file for RHEL for Edge Installer images, with an OSTree repository. This repository contains the commit and a JSON file that contains metadata about the repository content. 4.5. Additional resources Composing a RHEL for Edge image using image builder command-line .
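As an optional verification of a downloaded RHEL for Edge Commit (.tar) archive, you can extract it and inspect the embedded OSTree repository from the command line. This is only a sketch: the archive name is a placeholder, and it assumes the usual layout of a repo directory plus a compose.json metadata file and the default rhel/8/$ARCH/edge ref.

$ tar -xf rhel-edge-commit.tar                 # placeholder archive name; extracts the repo directory and compose.json
$ ostree refs --repo=repo                      # lists the refs in the extracted repository
$ ostree log --repo=repo rhel/8/x86_64/edge    # shows the commit history; x86_64 is one example architecture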
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_installing_and_managing_rhel_for_edge_images/composing-rhel-for-edge-images-using-image-builder-in-rhel-web-console_composing-installing-managing-rhel-for-edge-images
|
Chapter 2. Installing a cluster on Nutanix
|
Chapter 2. Installing a cluster on Nutanix In OpenShift Container Platform version 4.13, you can choose one of the following options to install a cluster on your Nutanix instance: Using installer-provisioned infrastructure : Use the procedures in the following sections to use installer-provisioned infrastructure. Installer-provisioned infrastructure is ideal for installing in connected or disconnected network environments. The installer-provisioned infrastructure includes an installation program that provisions the underlying infrastructure for the cluster. Using the Assisted Installer : The Assisted Installer is hosted at console.redhat.com . The Assisted Installer cannot be used in disconnected environments. The Assisted Installer does not provision the underlying infrastructure for the cluster, so you must provision the infrastructure before running the Assisted Installer. Installing with the Assisted Installer also provides integration with Nutanix, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details. Using user-provisioned infrastructure : Complete the relevant steps outlined in the Installing a cluster on any platform documentation. 2.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3.
Internet access for Prism Central Prism Central requires internet access to obtain the Red Hat Enterprise Linux CoreOS (RHCOS) image that is required to install the cluster. The RHCOS image for Nutanix is available at rhcos.mirror.openshift.com . 2.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.6. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 2.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration.
Run the following command: $ rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Optional: Update one or more of the default configuration parameters in the install-config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 2.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 2.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 2.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} .
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 2.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 2.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 2.7.1.3. 
Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 2.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. 
All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 2.7.1.4. Additional Nutanix configuration parameters Additional Nutanix configuration parameters are described in the following table: Table 2.4. Additional Nutanix cluster parameters Parameter Description Values compute.platform.nutanix.categories.key The name of a prism category key to apply to compute VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . 
String compute.platform.nutanix.categories.value The value of a prism category key-value pair to apply to compute VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String compute.platform.nutanix.project.type The type of identifier you use to select a project for compute VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid compute.platform.nutanix.project.name or compute.platform.nutanix.project.uuid The name or UUID of a project with which compute VMs are associated. This parameter must be accompanied by the type parameter. String compute.platform.nutanix.bootType The boot type that the compute machines use. You must use the Legacy boot type in OpenShift Container Platform 4.13. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . controlPlane.platform.nutanix.categories.key The name of a prism category key to apply to control plane VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String controlPlane.platform.nutanix.categories.value The value of a prism category key-value pair to apply to control plane VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String controlPlane.platform.nutanix.project.type The type of identifier you use to select a project for control plane VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid controlPlane.platform.nutanix.project.name or controlPlane.platform.nutanix.project.uuid The name or UUID of a project with which control plane VMs are associated. This parameter must be accompanied by the type parameter. String platform.nutanix.defaultMachinePlatform.categories.key The name of a prism category key to apply to all VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String platform.nutanix.defaultMachinePlatform.categories.value The value of a prism category key-value pair to apply to all VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String platform.nutanix.defaultMachinePlatform.project.type The type of identifier you use to select a project for all VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid . platform.nutanix.defaultMachinePlatform.project.name or platform.nutanix.defaultMachinePlatform.project.uuid The name or UUID of a project with which all VMs are associated. This parameter must be accompanied by the type parameter. String platform.nutanix.defaultMachinePlatform.bootType The boot type for all machines. You must use the Legacy boot type in OpenShift Container Platform 4.13. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . 
platform.nutanix.apiVIP The virtual IP (VIP) address that you configured for control plane API access. IP address platform.nutanix.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. IP address platform.nutanix.prismCentral.endpoint.address The Prism Central domain name or IP address. String platform.nutanix.prismCentral.endpoint.port The port that is used to log into Prism Central. String platform.nutanix.prismCentral.password The password for the Prism Central user name. String platform.nutanix.prismCentral.username The user name that is used to log into Prism Central. String platform.nutanix.prismElements.endpoint.address The Prism Element domain name or IP address. [ 1 ] String platform.nutanix.prismElements.endpoint.port The port that is used to log into Prism Element. String platform.nutanix.prismElements.uuid The universally unique identifier (UUID) for Prism Element. String platform.nutanix.subnetUUIDs The UUID of the Prism Element network that contains the virtual IP addresses and DNS records that you configured. [ 2 ] String platform.nutanix.clusterOSImage Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 The prismElements section holds a list of Prism Elements (clusters). A Prism Element encompasses all of the Nutanix resources, for example virtual machines and subnets, that are used to host the OpenShift Container Platform cluster. Only a single Prism Element is supported. Only one subnet per OpenShift Container Platform cluster is supported. 2.7.2. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 10 12 15 16 17 18 19 21 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 13 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 14 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. 
If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.9. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=nutanix \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. If the specified directory does not exist, this command creates it. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on Nutanix 0000_26_cloud-controller-manager-operator_18_credentialsrequest-nutanix.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. Use the ccoctl tool to process all of the CredentialsRequest objects in the credrequests directory by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Specify the directory that contains the files of the component credentials secrets, under the manifests directory. By default, the ccoctl tool creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . To specify a different directory, use the --credentials-source-filepath flag. Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 2.10. Adding config map and secret resources required for Nutanix CCM Installations on Nutanix require additional ConfigMap and Secret resources to integrate with the Nutanix Cloud Controller Manager (CCM). Prerequisites You have created a manifests directory within your installation directory. Procedure Navigate to the manifests directory: USD cd <path_to_installation_directory>/manifests Create the cloud-conf ConfigMap file with the name openshift-cloud-controller-manager-cloud-config.yaml and add the following information: apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: "{ \"prismCentral\": { \"address\": \"<prism_central_FQDN/IP>\", 1 \"port\": 9440, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }" 1 Specify the Prism Central FQDN/IP. Verify that the file cluster-infrastructure-02-config.yml exists and has the following information: spec: cloudConfig: key: config name: cloud-provider-config 2.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
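Before you start the deployment, you can optionally confirm that the Nutanix CCM config map and credentials manifests created in the previous sections are present in your installation directory. This is a minimal sketch and not part of the documented procedure; the grep pattern simply matches the file names shown earlier:
$ ls <installation_directory>/manifests | grep -E 'cloud-controller-manager|credentials'   # expect openshift-cloud-controller-manager-cloud-config.yaml and the two *-credentials-credentials.yaml files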
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 2.12. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 2.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 2.14. Additional resources About remote health monitoring 2.15. Next steps Opt out of remote health reporting Customize your cluster
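As an optional post-installation check before you move on, you can confirm that the cluster has settled. This is a minimal sketch and not part of the documented procedure; the kubeconfig path follows the auth directory shown in the example output above:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get nodes               # all nodes should report a Ready status
$ oc get clusteroperators    # wait until every operator shows AVAILABLE=True and PROGRESSING=False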
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=nutanix --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"0000_26_cloud-controller-manager-operator_18_credentialsrequest-nutanix.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"cd <path_to_installation_directory>/manifests",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"",
"spec: cloudConfig: key: config name: cloud-provider-config",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_nutanix/installing-nutanix-installer-provisioned
|
Chapter 3. Manually upgrading using the roxctl CLI
|
Chapter 3. Manually upgrading using the roxctl CLI You can upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes (RHACS) from a supported older version. Important You need to perform the manual upgrade procedure only if you used the roxctl CLI to install RHACS. There are manual steps for each version upgrade that must be followed, for example, from version 3.74 to version 4.0, and from version 4.0 to version 4.1. Therefore, Red Hat recommends upgrading first from 3.74 to 4.0, then from 4.0 to 4.1, then 4.1 to 4.2, until the selected version is installed. For full functionality, Red Hat recommends upgrading to the most recent version. To upgrade RHACS to the latest version, perform the following steps: Backup the Central database Upgrade the roxctl CLI Upgrade the Central cluster Upgrade all secured clusters 3.1. Backing up the Central database You can back up the Central database and use that backup for rolling back from a failed upgrade or data restoration in the case of an infrastructure disaster. Prerequisites You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources. You have installed the roxctl CLI. You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables. Procedure Run the backup command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" central backup Additional resources Authenticating by using the roxctl CLI 3.2. Upgrading the roxctl CLI To upgrade the roxctl CLI to the latest version you must uninstall the existing version of roxctl CLI and then install the latest version of the roxctl CLI. 3.2.1. Uninstalling the roxctl CLI You can uninstall the roxctl CLI binary on Linux by using the following procedure. Procedure Find and delete the roxctl binary: USD ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1 1 Depending on your environment, you might need administrator rights to delete the roxctl binary. 3.2.2. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 3.2.3. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. 
Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 3.2.4. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 3.3. Upgrading the Central cluster After you have created a backup of the Central database and generated the necessary resources by using the provisioning bundle, the step is to upgrade the Central cluster. This process involves upgrading Central and Scanner. 3.3.1. Upgrading Central You can update Central to the latest version by downloading and deploying the updated images. Procedure Run the following command to update the Central image: USD oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Verification Verify that the new pods have deployed: USD oc get deploy -n stackrox -o wide USD oc get pod -n stackrox --watch 3.3.1.1. Editing the GOMEMLIMIT environment variable for the Central deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Central deployment: USD oc -n stackrox edit deploy/central 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.3.2. Upgrading Scanner You can update Scanner to the latest version by downloading and deploying the updated images. Procedure Run the following command to update the Scanner image: USD oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Verification Verify that the new pods have deployed: USD oc get deploy -n stackrox -o wide USD oc get pod -n stackrox --watch 3.3.2.1. Editing the GOMEMLIMIT environment variable for the Scanner deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Scanner deployment: USD oc -n stackrox edit deploy/scanner 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.3.3. Verifying the Central cluster upgrade After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete. 
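You can also confirm that the updated deployments have finished rolling out before you inspect the logs. This is a minimal sketch rather than part of the documented procedure; if you use Kubernetes, enter kubectl instead of oc :
$ oc -n stackrox rollout status deploy/central --timeout=5m    # waits until the new Central replicas are available
$ oc -n stackrox rollout status deploy/scanner --timeout=5m    # waits until the new Scanner replicas are available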
Procedure Check the Central logs by running the following command: USD oc logs -n stackrox deploy/central -c central 1 1 If you use Kubernetes, enter kubectl instead of oc . Sample output of a successful upgrade No database restore directory found (this is not an error). Migrator: 2023/04/19 17:58:54: starting DB compaction Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357 badger 2023/04/19 17:58:55 INFO: Replay took: 50.324ms badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here. badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]} version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We're good to go! 3.4. Upgrading all secured clusters After upgrading Central services, you must upgrade all secured clusters. Important If you are using automatic upgrades: Update all your secured clusters by using automatic upgrades. For information about troubleshooting problems with the automatic cluster upgrader, see Troubleshooting the cluster upgrader . Skip the instructions in this section and follow the instructions in the Verify upgrades and Revoking the API token sections. If you are not using automatic upgrades, you must run the instructions in this section on all secured clusters including the Central cluster. To ensure optimal functionality, use the same RHACS version for your secured clusters and the cluster on which Central is installed. To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow the instructions in this section. 3.4.1. Updating other images You must update the sensor, collector and compliance images on each secured cluster when not using automatic upgrades. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. Procedure Update the Sensor image: USD oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Compliance image: USD oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Collector image: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the admission control image: USD oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 Important If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section. 3.4.2. Adding POD_NAMESPACE to sensor and admission-control deployments When upgrading to version 4.6 or later from a version earlier than 4.6, you must patch the sensor and admission-control deployments to set the POD_NAMESPACE environment variable. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. 
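Before applying the POD_NAMESPACE patches in the following procedure, you can optionally confirm that the image updates from the previous section were applied. This is a minimal sketch and not part of the documented procedure; the jsonpath expressions are illustrative and assume the default container layout:
$ oc -n stackrox get deploy/sensor -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'              # expect the 4.7.0 main image
$ oc -n stackrox get ds/collector -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'               # expect the 4.7.0 collector and main (compliance) images
$ oc -n stackrox get deploy/admission-control -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'   # expect the 4.7.0 main image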
Procedure Patch sensor to ensure POD_NAMESPACE is set by running the following command: USD [[ -z "USD(oc -n stackrox get deployment sensor -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment sensor --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]' Patch admission-control to ensure POD_NAMESPACE is set by running the following command: USD [[ -z "USD(oc -n stackrox get deployment admission-control -o yaml | grep POD_NAMESPACE)" ]] && oc -n stackrox patch deployment admission-control --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}}]' steps Verifying secured cluster upgrade Additional resources Migrating SCCs during the manual upgrade 3.4.3. Migrating SCCs during the manual upgrade By migrating the security context constraints (SCCs) during the manual upgrade by using roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters. Procedure List all of the RHACS services that are deployed on Central and all secured clusters: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Example output Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control #... Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector #... Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh #... Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #... In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field. Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs. To add the required roles and role bindings to use the Red Hat OpenShift SCCs for the Central cluster, complete the following steps: Create a file named update-central.yaml that defines the role and role binding resources by using the following content: Example 3.1. 
Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-db-scc 2 namespace: stackrox 3 Rules: 4 - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-scanner-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.k ubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-db-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-db-scc subjects: 8 - kind: ServiceAccount name: central-db namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-scc subjects: - kind: ServiceAccount name: central namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: scanner-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-scanner-scc subjects: - kind: ServiceAccount name: scanner namespace: stackrox - - - 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 
7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the update-central.yaml file by running the following command: USD oc -n stackrox create -f ./update-central.yaml To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps: Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content: Example 3.2. Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - - 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command: USD oc -n stackrox create -f ./update-scs.yaml Important You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file. Delete the SCCs that are specific to RHACS: To delete the SCCs that are specific to the Central cluster, run the following command: USD oc delete scc/stackrox-central scc/stackrox-central-db scc/stackrox-scanner To delete the SCCs that are specific to all secured clusters, run the following command: USD oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor Important You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster. Verification Ensure that all the pods are using the correct SCCs by running the following command: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Compare the output with the following table: Component custom SCC New Red Hat OpenShift 4 SCC Central stackrox-central nonroot-v2 Central-db stackrox-central-db nonroot-v2 Scanner stackrox-scanner nonroot-v2 Scanner-db stackrox-scanner nonroot-v2 Admission Controller stackrox-admission-control restricted-v2 Collector stackrox-collector privileged Sensor stackrox-sensor restricted-v2 3.4.3.1. 
Editing the GOMEMLIMIT environment variable for the Sensor deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Sensor deployment: USD oc -n stackrox edit deploy/sensor 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.4.3.2. Editing the GOMEMLIMIT environment variable for the Collector deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Collector deployment: USD oc -n stackrox edit deploy/collector 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.4.3.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Admission Controller deployment: USD oc -n stackrox edit deploy/admission-control 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 3.4.3.4. Verifying secured cluster upgrade After you have upgraded secured clusters, verify that the updated pods are working. Procedure Check that the new pods have deployed: USD oc get deploy,ds -n stackrox -o wide 1 1 If you use Kubernetes, enter kubectl instead of oc . USD oc get pod -n stackrox --watch 1 1 If you use Kubernetes, enter kubectl instead of oc . 3.5. Enabling RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner". Procedure Run one of the following commands to update the compliance container. 
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.0","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' Additional resources Scanning RHCOS node hosts 3.6. Rolling back Central You can roll back to a version of Central if the upgrade to a new version is unsuccessful. 3.6.1. Rolling back Central normally You can roll back to a version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails. 
Prerequisites Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you might not be able to roll back to an earlier version. Procedure Run the following command to roll back to a version when an upgrade fails (before the Central service starts): USD oc -n stackrox rollout undo deploy/central 1 1 If you use Kubernetes, enter kubectl instead of oc . 3.6.2. Rolling back Central forcefully You can use forced rollback to roll back to an earlier version of Central (after the Central service starts). Important Using forced rollback to switch back to a version might result in loss of data and functionality. Prerequisites Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version. Procedure Run the following commands to perform a forced rollback: To forcefully rollback to the previously installed version: USD oc -n stackrox rollout undo deploy/central 1 1 If you use Kubernetes, enter kubectl instead of oc . To forcefully rollback to a specific version: Edit Central's ConfigMap : USD oc -n stackrox edit configmap/central-config 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the value of the maintenance.forceRollbackVersion key: data: central-config.yaml: | maintenance: safeMode: false compaction: enabled: true bucketFillFraction: .5 freeFractionThreshold: 0.75 forceRollbackVersion: <x.x.x.x> 1 ... 1 Specify the version that you want to roll back to. Update the Central image version: USD oc -n stackrox \ 1 set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Specify the version that you want to roll back to. It must be the same version that you specified for the maintenance.forceRollbackVersion key in the central-config config map. 3.7. Verifying upgrades The updated Sensors and Collectors continue to report the latest data from each secured cluster. The last time Sensor contacted Central is visible in the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration System Health . Check to ensure that Sensor Upgrade shows clusters up to date with Central. 3.8. Revoking the API token For security reasons, Red Hat recommends that you revoke the API token that you have used to complete Central database backup. Prerequisites After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Authentication Tokens category, and click API Token . Select the checkbox in front of the token name that you want to revoke. Click Revoke . On the confirmation dialog box, click Confirm . 3.9. Troubleshooting the cluster upgrader If you encounter problems when using the legacy installation method for the secured cluster and enabling the automated updates, you can try troubleshooting the problem. The following errors can be found in the clusters view when the upgrader fails. 3.9.1. 
Upgrader is missing permissions Symptom The following error is displayed in the cluster page: Upgrader failed to execute PreflightStage of the roll-forward workflow: executing stage "Run preflight checks": preflight check "Kubernetes authorization" reported errors. This usually means that access is denied. Have you configured this Secured Cluster for automatically receiving upgrades?" Procedure Ensure that the bundle for the secured cluster was generated with future upgrades enabled before clicking Download YAML file and keys . If possible, remove that secured cluster and generate a new bundle making sure that future upgrades are enabled. If you cannot re-create the cluster, you can take these actions: Ensure that the service account sensor-upgrader exists in the same namespace as Sensor. Ensure that a ClusterRoleBinding exists (default name: <namespace>:upgrade-sensors ) that grants the cluster-admin ClusterRole to the sensor-upgrader service account. 3.9.2. Upgrader cannot start due to missing image Symptom The following error is displayed in the cluster page: "Upgrade initialization error: The upgrader pods have trouble pulling the new image: Error pulling image: (...) (<image_reference:tag>: not found)" Procedure Ensure that the Secured Cluster can access the registry and pull the image <image_reference:tag> . Ensure that the image pull secrets are configured correctly in the secured cluster. 3.9.3. Upgrader cannot start due to an unknown reason Symptom The following error is displayed in the cluster page: "Upgrade initialization error: Pod terminated: (Error)" Procedure Ensure that the upgrader has enough permissions for accessing the cluster objects. For more information, see "Upgrader is missing permissions". Check the upgrader logs for more insights. 3.9.3.1. Obtaining upgrader logs The logs can be accessed by running the following command: USD kubectl -n <namespace> logs deploy/sensor-upgrader 1 1 For <namespace> , specify the namespace in which Sensor is running. Usually, the upgrader deployment is only running in the cluster for a short time while doing the upgrades. It is removed later, so accessing its logs using the orchestrator CLI can require proper timing.
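The permission checks described above can also be run from the command line. The following is a minimal sketch, not part of the official procedure, that verifies the sensor-upgrader service account and the cluster role binding exist and recreates them if they are missing; it assumes Sensor runs in the stackrox namespace and that the binding uses the default <namespace>:upgrade-sensors name (substitute kubectl for oc on plain Kubernetes):
# Verify that the upgrader service account exists in the Sensor namespace
oc -n stackrox get serviceaccount sensor-upgrader
# Verify that the cluster role binding grants cluster-admin to that service account
oc get clusterrolebinding stackrox:upgrade-sensors -o wide
# If either object is missing, recreate it (assumed names shown; adjust the namespace to match your Sensor deployment)
oc -n stackrox create serviceaccount sensor-upgrader
oc create clusterrolebinding stackrox:upgrade-sensors --clusterrole=cluster-admin --serviceaccount=stackrox:sensor-upgrader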
|
[
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central backup",
"ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Windows/roxctl.exe",
"roxctl version",
"oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1",
"oc get deploy -n stackrox -o wide",
"oc get pod -n stackrox --watch",
"oc -n stackrox edit deploy/central 1",
"oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.7.0 1",
"oc get deploy -n stackrox -o wide",
"oc get pod -n stackrox --watch",
"oc -n stackrox edit deploy/scanner 1",
"oc logs -n stackrox deploy/central -c central 1",
"No database restore directory found (this is not an error). Migrator: 2023/04/19 17:58:54: starting DB compaction Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357 badger 2023/04/19 17:58:55 INFO: Replay took: 50.324ms badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here. badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]} version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We're good to go!",
"oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1",
"oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1",
"oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.7.0 1",
"oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0",
"[[ -z \"USD(oc -n stackrox get deployment sensor -o yaml | grep POD_NAMESPACE)\" ]] && oc -n stackrox patch deployment sensor --type=json -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/env/-\",\"value\":{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}}]'",
"[[ -z \"USD(oc -n stackrox get deployment admission-control -o yaml | grep POD_NAMESPACE)\" ]] && oc -n stackrox patch deployment admission-control --type=json -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/env/-\",\"value\":{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}}]'",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control # Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector # Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh # Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-db-scc 2 namespace: stackrox 3 Rules: 4 - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-central-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: use-scanner-scc namespace: stackrox rules: - apiGroups: - security.openshift.io resourceNames: - nonroot-v2 resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.k ubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-db-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-db-scc subjects: 8 - kind: ServiceAccount name: central-db namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: central app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: central-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-central-scc subjects: - kind: ServiceAccount name: central namespace: stackrox - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: scanner app.kubernetes.io/instance: stackrox-central-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-central-services app.kubernetes.io/version: 4.4.0 name: scanner-use-scc namespace: stackrox roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: use-scanner-scc subjects: - kind: ServiceAccount name: scanner namespace: stackrox - - -",
"oc -n stackrox create -f ./update-central.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - -",
"oc -n stackrox create -f ./update-scs.yaml",
"oc delete scc/stackrox-central scc/stackrox-central-db scc/stackrox-scanner",
"oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"oc -n stackrox edit deploy/sensor 1",
"oc -n stackrox edit deploy/collector 1",
"oc -n stackrox edit deploy/admission-control 1",
"oc get deploy,ds -n stackrox -o wide 1",
"oc get pod -n stackrox --watch 1",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'",
"oc -n stackrox rollout undo deploy/central 1",
"oc -n stackrox rollout undo deploy/central 1",
"oc -n stackrox edit configmap/central-config 1",
"data: central-config.yaml: | maintenance: safeMode: false compaction: enabled: true bucketFillFraction: .5 freeFractionThreshold: 0.75 forceRollbackVersion: <x.x.x.x> 1",
"oc -n stackrox \\ 1 set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> 2",
"Upgrader failed to execute PreflightStage of the roll-forward workflow: executing stage \"Run preflight checks\": preflight check \"Kubernetes authorization\" reported errors. This usually means that access is denied. Have you configured this Secured Cluster for automatically receiving upgrades?\"",
"\"Upgrade initialization error: The upgrader pods have trouble pulling the new image: Error pulling image: (...) (<image_reference:tag>: not found)\"",
"\"Upgrade initialization error: Pod terminated: (Error)\"",
"kubectl -n <namespace> logs deploy/sensor-upgrader 1"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/upgrading/upgrade-roxctl
|
Chapter 4. Securing the Fuse Console
|
Chapter 4. Securing the Fuse Console To secure the Fuse Console on Spring Boot: Disable the Fuse Console's proxy servlet when deploying to AWS If you want to deploy a standalone Fuse application to Amazon Web Services (AWS), you should disable the Fuse Console's proxy servlet by setting the hawtio.disableProxy system property to true . Note When you disable the Fuse Console proxy servlet, the Fuse Console's Connect tab is disabled and you cannot connect to other JVMs from the Fuse Console. If you want to deploy more than one Fuse application on AWS, you must deploy the Fuse Console for each application. Set HTTPS as the required protocol You can use the hawtio.http.strictTransportSecurity property to require web browsers to use the secure HTTPS protocol to access the Fuse Console. This property specifies that web browsers that try to use HTTP to access the Fuse Console must automatically convert the request to use HTTPS. Use public keys to secure responses You can use the hawtio.http.publicKeyPins property to secure the HTTPS protocol by telling the web browser to associate a specific cryptographic public key with the Fuse Console to decrease the risk of "man-in-the-middle" attacks with forged certificates. Procedure Set the hawtio.http.strictTransportSecurity and hawtio.http.publicKeyPins properties as shown in the following example: (For deploying on AWS only) To disable the Fuse Console's proxy servlet, set the hawtio.disableProxy property as shown in the following example: Additional resources For a description of the hawtio.http.strictTransportSecurity property's syntax, see the description page for the HTTP Strict Transport Security (HSTS) response header. For a description of the hawtio.http.publicKeyPins property's syntax, including instructions on how to extract the Base64 encoded public key, see the description page for the HTTP Public Key Pinning response header.
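After restarting the application, you can check from the command line that the security-related headers are actually being returned. The following is a minimal verification sketch only; the host, port, and console path are placeholders that depend on how your Fuse Console is exposed, and the -k flag is shown only for self-signed test certificates:
# Inspect the response headers of the Fuse Console endpoint (replace host, port, and path with your own values)
curl -skI https://<host>:<port>/<hawtio-context-path> | grep -iE 'strict-transport-security|public-key-pins'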
|
[
"public static void main(String[] args) { System.setProperty(\"hawtio.http.strictTransportSecurity\", \"max-age=31536000; includeSubDomains; preload\"); System.setProperty(\"hawtio.http.publicKeyPins\", \"pin-sha256=cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs\"; max-age=5184000; includeSubDomains\"); SpringApplication.run(YourSpringBootApplication.class, args); }",
"public static void main(String[] args) { System.setProperty(\"hawtio.disableProxy\", \"true\"); }"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_springboot_standalone/fuse-console-security-springboot
|
Part IV. Part IV: Managing the subsystem instances
|
Part IV. Part IV: Managing the subsystem instances
| null |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/part_iv_managing_the_subsystem_instances
|
Chapter 1. Preparing to deploy OpenShift Data Foundation
|
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services. Then, all applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices . Enable file system access on Red Hat Enterprise Linux based hosts for worker nodes. See enable file system access for containers on Red Hat Enterprise Linux based nodes . Note Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS). Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See enabled the key value backend path and policy in Vault . Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow the below steps in the order given: Install Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create OpenShift Data Foundation cluster on bare metal . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation. The devices you use must be empty; the disks must not include physical volumes (PVs), volume groups (VGs), or logical volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . Arbiter stretch cluster requirements [Technology Preview] In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a technology preview feature that is currently intended for deployment in the OpenShift Container Platform on-premises. For detailed requirements and instructions, see Configuring OpenShift Data Foundation for Metro-DR stretch cluster . Note Flexible scaling and Arbiter both cannot be enabled at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas in an Arbiter cluster, you need to add at least one node in each of the two data zones. Compact mode requirements OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. 
To configure OpenShift Container Platform in compact mode, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. For more information, see Resource requirements section in the Planning guide. 1.2. Enabling file system access for containers on Red Hat Enterprise Linux based nodes Deploying OpenShift Data Foundation on an OpenShift Container Platform with worker nodes on a Red Hat Enterprise Linux base in a user provisioned infrastructure (UPI) does not automatically provide container access to the underlying Ceph file system. Note Skip this step for hosts based on Red Hat Enterprise Linux CoreOS (RHCOS). Procedure Log in to the Red Hat Enterprise Linux based node and open a terminal. For each node in your cluster: Verify that the node has access to the rhel-7-server-extras-rpms repository. If you do not see both rhel-7-server-rpms and rhel-7-server-extras-rpms in the output, or if there is no output, run the following commands to enable each repository: Install the required packages. Persistently enable container use of the Ceph file system in SELinux. 1.3. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully, choose a unique path name as the backend path that follows the naming convention since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to perform a write or delete operation on the secret using the following commands. Create a token matching the above policy.
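Optionally, you can confirm from the Vault CLI that the backend path, policy, and token behave as expected before wiring them into OpenShift Data Foundation. This is a minimal verification sketch that assumes the backend path and the policy are both named odf, as in the commands above, and uses a throwaway secret name (test-key) purely for illustration:
# Confirm the KV secrets engine is mounted at the odf path
vault secrets list -detailed | grep odf
# Confirm the policy grants create/read/update/delete/list on odf/*
vault policy read odf
# Exercise the new token by writing and then deleting a scratch secret under the path
VAULT_TOKEN=<token-from-previous-step> vault kv put odf/test-key value=check
VAULT_TOKEN=<token-from-previous-step> vault kv delete odf/test-key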
|
[
"subscription-manager repos --list-enabled | grep rhel-7-server",
"subscription-manager repos --enable=rhel-7-server-rpms",
"subscription-manager repos --enable=rhel-7-server-extras-rpms",
"yum install -y policycoreutils container-selinux",
"setsebool -P container_use_cephfs on",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preparing_to_deploy_openshift_data_foundation
|
probe::nfsd.proc.remove
|
probe::nfsd.proc.remove
Name
probe::nfsd.proc.remove - NFS server removing a file for client
Synopsis
nfsd.proc.remove
Values
gid: requester's group id
fh: file handle (the first part is the length of the file handle)
filelen: length of the file name
uid: requester's user id
version: NFS version
proto: transfer protocol
filename: file name
client_ip: the IP address of the client
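As an illustration of how this probe might be used, the following one-liner is a minimal sketch (not part of the reference above) that prints some of the listed values whenever the NFS server removes a file for a client; it assumes SystemTap is installed on the NFS server, that you run it as root, and that filename is a string while uid, gid, and version are numeric, as in common tapset versions:
# Trace file removals handled by the NFS server (press Ctrl+C to stop)
stap -e 'probe nfsd.proc.remove { printf("remove: file=%s uid=%d gid=%d nfsvers=%d\n", filename, uid, gid, version) }'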
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfsd-proc-remove
|
Providing feedback on Red Hat JBoss Core Services documentation
|
Providing feedback on Red Hat JBoss Core Services documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_modsecurity_guide/providing-direct-documentation-feedback_jbcs-mod_sec-guide
|
Chapter 103. Netty
|
Chapter 103. Netty Both producer and consumer are supported The Netty component in Camel is a socket communication component, based on the Netty project version 4. Netty is a NIO client-server framework which enables quick and easy development of network applications such as protocol servers and clients. Netty greatly simplifies and streamlines network programming such as TCP and UDP socket servers. This Camel component supports both producer and consumer endpoints. The Netty component has several options and allows fine-grained control of a number of TCP/UDP communication parameters (buffer sizes, keepAlives, tcpNoDelay, etc) and facilitates both In-Only and In-Out communication on a Camel route. 103.1. Dependencies When using netty with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency> 103.2. URI format The URI scheme for a netty component is as follows: netty:tcp://0.0.0.0:99999[?options] netty:udp://remotehost:99999/[?options] This component supports producer and consumer endpoints for both TCP and UDP. 103.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 103.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 103.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 103.4. Component Options The Netty component supports 73 options, which are listed below. Name Description Default Type configuration (common) To use the NettyConfiguration as configuration when creating endpoints. NettyConfiguration disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing.
true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean executorService (consumer (advanced)) To use the given EventExecutorGroup. EventExecutorGroup maximumPoolSize (consumer (advanced)) Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. 
NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. 
true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. 
int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. 
String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 103.5. Endpoint Options The Netty endpoint is configured using URI syntax: with the following path and query parameters: 103.5.1. Path Parameters (3 parameters) Name Description Default Type protocol (common) Required The protocol to use which can be tcp or udp. Enum values: tcp udp String host (common) Required The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to. String port (common) Required The host port number. int 103.5.2. Query Parameters (71 parameters) Name Description Default Type disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. 
The interval in milli seconds to attempt reconnection. 10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. 
When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. 
String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 103.6. Registry based Options Codec Handlers and SSL Keystores can be enlisted in the Registry, such as in the Spring XML file. The values that could be passed in, are the following: Name Description passphrase password setting to use in order to encrypt/decrypt payloads sent using SSH keyStoreFormat keystore format to be used for payload encryption. Defaults to "JKS" if not set securityProvider Security provider to be used for payload encryption. Defaults to "SunX509" if not set. keyStoreFile deprecated: Client side certificate keystore to be used for encryption trustStoreFile deprecated: Server side certificate keystore to be used for encryption keyStoreResource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. trustStoreResource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. sslHandler Reference to a class that could be used to return an SSL Handler encoder A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. Must override io.netty.channel.ChannelInboundHandlerAdapter. encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. 
Just remember to prefix the value with # so Camel knows it should lookup. decoder A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. Must override io.netty.channel.ChannelOutboundHandlerAdapter. decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. Note Read below about using non shareable encoders/decoders. 103.6.1. Using non shareable encoders or decoders If your encoders or decoders are not shareable (e.g. they don't have the @Shareable class annotation), then your encoder/decoder must implement the org.apache.camel.component.netty.ChannelHandlerFactory interface, and return a new instance in the newChannelHandler method. This is to ensure the encoder/decoder can safely be used. If this is not the case, then the Netty component will log a WARN when an endpoint is created. The Netty component offers a org.apache.camel.component.netty.ChannelHandlerFactories factory class, that has a number of commonly used methods. 103.7. Sending Messages to/from a Netty endpoint 103.7.1. Netty Producer In Producer mode, the component provides the ability to send payloads to a socket endpoint using either TCP or UDP protocols (with optional SSL support). The producer mode supports both one-way and request-response based operations. 103.7.2. Netty Consumer In Consumer mode, the component provides the ability to: listen on a specified socket using either TCP or UDP protocols (with optional SSL support), receive requests on the socket using text/xml, binary and serialized object based payloads and send them along on a route as message exchanges. The consumer mode supports both one-way and request-response based operations. 103.8. Examples 103.8.1. A UDP Netty endpoint using Request-Reply and serialized object payload Note that Object serialization is not allowed by default, and so a decoder must be configured. @BindToRegistry("decoder") public ChannelHandler getDecoder() throws Exception { return new DefaultChannelHandlerFactory() { @Override public ChannelHandler newChannelHandler() { return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null)); } }; } RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); // Process poetry in some way exchange.getOut().setBody("Message received); } } } }; 103.8.2. A TCP based Netty consumer endpoint using One-way communication RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty:tcp://0.0.0.0:5150") .to("mock:result"); } }; 103.8.3. An SSL/TCP based Netty consumer endpoint using Request-Reply communication Using the JSSE Configuration Utility The Netty component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Netty component. 
Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent("netty", NettyComponent.class); nettyComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters>... ... <to uri="netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters"/> ... Using Basic SSL/TLS configuration on the Jetty Component Registry registry = context.getRegistry(); registry.bind("password", "changeit"); registry.bind("ksf", new File("src/test/resources/keystore.jks")); registry.bind("tsf", new File("src/test/resources/keystore.jks")); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password" + "&keyStoreFile=#ksf&trustStoreFile=#tsf"; String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } }); Getting access to SSLSession and the client certificate You can get access to the javax.net.ssl.SSLSession if you eg need to get details about the client certificate. When ssl=true then the Netty component will store the SSLSession as a header on the Camel Message as shown below: SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN(); Remember to set needClientAuth=true to authenticate the client, otherwise SSLSession cannot access information about the client certificate, and you may get an exception javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated . You may also get this exception if the client certificate is expired or not valid etc. Note The option sslClientCertHeaders can be set to true which then enriches the Camel Message with headers having details about the client certificate. For example the subject name is readily available in the header CamelNettySSLClientCertSubjectName . 103.8.4. Using Multiple Codecs In certain cases it may be necessary to add chains of encoders and decoders to the netty pipeline. To add multpile codecs to a camel netty endpoint the 'encoders' and 'decoders' uri parameters should be used. Like the 'encoder' and 'decoder' parameters they are used to supply references (lists of ChannelUpstreamHandlers and ChannelDownstreamHandlers) that should be added to the pipeline. Note that if encoders is specified then the encoder param will be ignored, similarly for decoders and the decoder param. Note Read further above about using non shareable encoders/decoders. The lists of codecs need to be added to the Camel's registry so they can be resolved when the endpoint is created. 
ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind("length-decoder", lengthDecoder); registry.bind("string-decoder", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind("length-encoder", lengthEncoder); registry.bind("string-encoder", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind("encoders", encoders); registry.bind("decoders", decoders); Spring's native collections support can be used to specify the codec lists in an application context <util:list id="decoders" list-class="java.util.LinkedList"> <bean class="org.apache.camel.component.netty.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringDecoder"/> </util:list> <util:list id="encoders" list-class="java.util.LinkedList"> <bean class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringEncoder"/> </util:list> <bean id="length-encoder" class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean id="string-encoder" class="io.netty.handler.codec.string.StringEncoder"/> <bean id="length-decoder" class="org.apache.camel.component.netty.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean id="string-decoder" class="io.netty.handler.codec.string.StringDecoder"/> The bean names can then be used in netty endpoint definitions either as a comma separated list or contained in a List e.g. from("direct:multiple-codec").to("netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false"); from("netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false").to("mock:multiple-codec"); or via XML. <camelContext id="multiple-netty-codecs-context" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:multiple-codec"/> <to uri="netty:tcp://0.0.0.0:5150?encoders=#encoders&sync=false"/> </route> <route> <from uri="netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&sync=false"/> <to uri="mock:multiple-codec"/> </route> </camelContext> 103.9. Closing Channel When Complete When acting as a server you sometimes want to close the channel when, for example, a client conversion is finished. You can do this by simply setting the endpoint option disconnect=true . However you can also instruct Camel on a per message basis as follows. To instruct Camel to close the channel, you should add a header with the key CamelNettyCloseChannelWhenComplete set to a boolean true value. 
For instance, the example below will close the channel after it has written the bye message back to the client: from("netty:tcp://0.0.0.0:8080").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody("Bye " + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } }); Adding custom channel pipeline factories to gain complete control over a created pipeline. 103.10. Custom pipeline Custom channel pipelines provide complete control to the user over the handler/interceptor chain by inserting custom handler(s), encoder(s) & decoder(s) without having to specify them in the Netty Endpoint URL in a very simple way. In order to add a custom pipeline, a custom channel pipeline factory must be created and registered with the context via the context registry (Registry, or the camel-spring ApplicationContextRegistry etc). A custom pipeline factory must be constructed as follows A Producer linked channel pipeline factory must extend the abstract class ClientPipelineFactory . A Consumer linked channel pipeline factory must extend the abstract class ServerInitializerFactory . The classes should override the initChannel() method in order to insert custom handler(s), encoder(s) and decoder(s). Not overriding the initChannel() method creates a pipeline with no handlers, encoders or decoders wired to the pipeline. The example below shows how ServerInitializerFactory factory may be created 103.10.1. Using custom pipeline factory public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; protected void initChannel(Channel ch) throws Exception { ChannelPipeline channelPipeline = ch.pipeline(); channelPipeline.addLast("encoder-SD", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast("decoder-DELIM", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast("decoder-SD", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. channelPipeline.addLast("handler", new ServerChannelHandler(consumer)); } } The custom channel pipeline factory can then be added to the registry and instantiated/utilized on a camel route in the following way Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(); registry.bind("spf", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf" String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } }); 103.11. Reusing Netty boss and worker thread pools Netty has two kind of thread pools: boss and worker. By default each Netty consumer and producer has their private thread pools. If you want to reuse these thread pools among multiple consumers or producers then the thread pools must be created and enlisted in the Registry. 
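The shared worker pool can also be created and enlisted programmatically. The following sketch assumes the same NettyWorkerPoolBuilder workerCount setter and build() factory method that the Spring XML example below relies on; the registry name sharedPool and the io.netty.channel.EventLoopGroup return type are assumptions for illustration only. NettyWorkerPoolBuilder poolBuilder = new NettyWorkerPoolBuilder(); // org.apache.camel.component.netty.NettyWorkerPoolBuilder poolBuilder.setWorkerCount(2); // 2 worker threads, matching the XML example below EventLoopGroup sharedPool = poolBuilder.build(); // assumed to return io.netty.channel.EventLoopGroup camelContext.getRegistry().bind("sharedPool", sharedPool); // routes then refer to it as #sharedPool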
For example using Spring XML we can create a shared worker thread pool using the NettyWorkerPoolBuilder with 2 worker threads as shown below: <!-- use the worker pool builder to help create the shared thread pool --> <bean id="poolBuilder" class="org.apache.camel.component.netty.NettyWorkerPoolBuilder"> <property name="workerCount" value="2"/> </bean> <!-- the shared worker thread pool --> <bean id="sharedPool" class="org.jboss.netty.channel.socket.nio.WorkerPool" factory-bean="poolBuilder" factory-method="build" destroy-method="shutdown"> </bean> Note For boss thread pool there is a org.apache.camel.component.netty.NettyServerBossPoolBuilder builder for Netty consumers, and a org.apache.camel.component.netty.NettyClientBossPoolBuilder for the Netty producers. Then in the Camel routes we can refer to this worker pools by configuring the workerPool option in the URI as shown below: <route> <from uri="netty:tcp://0.0.0.0:5021?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false"/> <to uri="log:result"/> ... </route> And if we have another route we can refer to the shared worker pool: <route> <from uri="netty:tcp://0.0.0.0:5022?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false"/> <to uri="log:result"/> ... </route> and so forth. 103.12. Multiplexing concurrent messages over a single connection with request/reply When using Netty for request/reply messaging via the netty producer then by default each message is sent via a non-shared connection (pooled). This ensures that replies are automatic being able to map to the correct request thread for further routing in Camel. In other words correlation between request/reply messages happens out-of-the-box because the replies comes back on the same connection that was used for sending the request; and this connection is not shared with others. When the response comes back, the connection is returned back to the connection pool, where it can be reused by others. However if you want to multiplex concurrent request/responses on a single shared connection, then you need to turn off the connection pooling by setting producerPoolEnabled=false . Now this means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager=#myManager option. Note We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. You can find an example with the Apache Camel source code in the examples directory under the camel-example-netty-custom-correlation directory. 103.13. Spring Boot Auto-Configuration The component supports 74 options, which are listed below. Name Description Default Type camel.component.netty.allow-default-codec The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true Boolean camel.component.netty.allow-serialized-headers Only used for TCP when transferExchange is true. 
When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty.auto-append-delimiter Whether or not to auto append missing end delimiter when sending using the textline codec. true Boolean camel.component.netty.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.netty.backlog Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. Integer camel.component.netty.boss-count When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 Integer camel.component.netty.boss-group Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup camel.component.netty.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.netty.broadcast Setting to choose Multicast over UDP. false Boolean camel.component.netty.channel-group To use a explicit ChannelGroup. The option is a io.netty.channel.group.ChannelGroup type. ChannelGroup camel.component.netty.client-initializer-factory To use a custom ClientInitializerFactory. The option is a org.apache.camel.component.netty.ClientInitializerFactory type. ClientInitializerFactory camel.component.netty.client-mode If the clientMode is true, netty consumer will connect the address as a TCP client. false Boolean camel.component.netty.configuration To use the NettyConfiguration as configuration when creating endpoints. The option is a org.apache.camel.component.netty.NettyConfiguration type. NettyConfiguration camel.component.netty.connect-timeout Time to wait for a socket connection to be available. Value is in milliseconds. 10000 Integer camel.component.netty.correlation-manager To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. 
We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. The option is a org.apache.camel.component.netty.NettyCamelStateCorrelationManager type. NettyCamelStateCorrelationManager camel.component.netty.decoder-max-line-length The max line length to use for the textline codec. 1024 Integer camel.component.netty.decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty.delimiter The delimiter to use for the textline codec. Possible values are LINE and NULL. TextLineDelimiter camel.component.netty.disconnect Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false Boolean camel.component.netty.disconnect-on-no-reply If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true Boolean camel.component.netty.enabled Whether to enable auto configuration of the netty component. This is enabled by default. Boolean camel.component.netty.enabled-protocols Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String camel.component.netty.encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty.encoding The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String camel.component.netty.executor-service To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. EventExecutorGroup camel.component.netty.hostname-verification To enable/disable hostname verification on SSLEngine. false Boolean camel.component.netty.keep-alive Setting to ensure socket is not closed due to inactivity. true Boolean camel.component.netty.key-store-file Client side certificate keystore to be used for encryption. File camel.component.netty.key-store-format Keystore format to be used for payload encryption. Defaults to JKS if not set. String camel.component.netty.key-store-resource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty.lazy-channel-creation Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true Boolean camel.component.netty.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.netty.maximum-pool-size Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. Integer camel.component.netty.native-transport Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false Boolean camel.component.netty.need-client-auth Configures whether the server needs client authentication when using SSL. false Boolean camel.component.netty.netty-server-bootstrap-factory To use a custom NettyServerBootstrapFactory. The option is a org.apache.camel.component.netty.NettyServerBootstrapFactory type. NettyServerBootstrapFactory camel.component.netty.network-interface When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String camel.component.netty.no-reply-log-level If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. LoggingLevel camel.component.netty.options Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map camel.component.netty.passphrase Password setting to use in order to encrypt/decrypt payloads sent using SSH. String camel.component.netty.producer-pool-enabled Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true Boolean camel.component.netty.producer-pool-max-idle Sets the cap on the number of idle instances in the pool. 100 Integer camel.component.netty.producer-pool-max-total Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 Integer camel.component.netty.producer-pool-min-evictable-idle Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 Long camel.component.netty.producer-pool-min-idle Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. 
Integer camel.component.netty.receive-buffer-size The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 Integer camel.component.netty.receive-buffer-size-predictor Configures the buffer size predictor. See details at Jetty documentation and this mail thread. Integer camel.component.netty.reconnect Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true Boolean camel.component.netty.reconnect-interval Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 Integer camel.component.netty.request-timeout Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. Long camel.component.netty.reuse-address Setting to facilitate socket multiplexing. true Boolean camel.component.netty.reuse-channel This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false Boolean camel.component.netty.security-provider Security provider to be used for payload encryption. Defaults to SunX509 if not set. String camel.component.netty.send-buffer-size The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 Integer camel.component.netty.server-closed-channel-exception-caught-log-level If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. LoggingLevel camel.component.netty.server-exception-caught-log-level If the server (NettyConsumer) catches an exception then its logged using this logging level. LoggingLevel camel.component.netty.server-initializer-factory To use a custom ServerInitializerFactory. The option is a org.apache.camel.component.netty.ServerInitializerFactory type. ServerInitializerFactory camel.component.netty.ssl Setting to specify whether SSL encryption is applied to this endpoint. false Boolean camel.component.netty.ssl-client-cert-headers When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false Boolean camel.component.netty.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.netty.ssl-handler Reference to a class that could be used to return an SSL Handler. The option is a io.netty.handler.ssl.SslHandler type. SslHandler camel.component.netty.sync Setting to set endpoint as one-way or request-response. 
true Boolean camel.component.netty.tcp-no-delay Setting to improve TCP protocol performance. true Boolean camel.component.netty.textline Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false Boolean camel.component.netty.transfer-exchange Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty.trust-store-file Server side certificate keystore to be used for encryption. File camel.component.netty.trust-store-resource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty.udp-byte-array-codec For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false Boolean camel.component.netty.udp-connectionless-sending This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false Boolean camel.component.netty.use-byte-buf If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false Boolean camel.component.netty.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.netty.using-executor-service Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true Boolean camel.component.netty.worker-count When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. Integer camel.component.netty.worker-group To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup
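As a simple illustration of these auto-configuration options, a few of them could be set in the application.properties file of a Camel Spring Boot application; the property names come from the table above, and the values shown are examples only, not recommendations. camel.component.netty.request-timeout=30000 camel.component.netty.producer-pool-max-total=50 camel.component.netty.tcp-no-delay=true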
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency>",
"netty:tcp://0.0.0.0:99999[?options] netty:udp://remotehost:99999/[?options]",
"netty:protocol://host:port",
"@BindToRegistry(\"decoder\") public ChannelHandler getDecoder() throws Exception { return new DefaultChannelHandlerFactory() { @Override public ChannelHandler newChannelHandler() { return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null)); } }; } RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder\") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); // Process poetry in some way exchange.getOut().setBody(\"Message received); } } } };",
"RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty:tcp://0.0.0.0:5150\") .to(\"mock:result\"); } };",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent(\"netty\", NettyComponent.class); nettyComponent.setSslContextParameters(scp);",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters\"/>",
"Registry registry = context.getRegistry(); registry.bind(\"password\", \"changeit\"); registry.bind(\"ksf\", new File(\"src/test/resources/keystore.jks\")); registry.bind(\"tsf\", new File(\"src/test/resources/keystore.jks\")); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password\" + \"&keyStoreFile=#ksf&trustStoreFile=#tsf\"; String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });",
"SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN();",
"ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind(\"length-decoder\", lengthDecoder); registry.bind(\"string-decoder\", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind(\"length-encoder\", lengthEncoder); registry.bind(\"string-encoder\", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind(\"encoders\", encoders); registry.bind(\"decoders\", decoders);",
"<util:list id=\"decoders\" list-class=\"java.util.LinkedList\"> <bean class=\"org.apache.camel.component.netty.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringDecoder\"/> </util:list> <util:list id=\"encoders\" list-class=\"java.util.LinkedList\"> <bean class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringEncoder\"/> </util:list> <bean id=\"length-encoder\" class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-encoder\" class=\"io.netty.handler.codec.string.StringEncoder\"/> <bean id=\"length-decoder\" class=\"org.apache.camel.component.netty.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-decoder\" class=\"io.netty.handler.codec.string.StringDecoder\"/>",
"from(\"direct:multiple-codec\").to(\"netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false\"); from(\"netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false\").to(\"mock:multiple-codec\");",
"<camelContext id=\"multiple-netty-codecs-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:multiple-codec\"/> <to uri=\"netty:tcp://0.0.0.0:5150?encoders=#encoders&sync=false\"/> </route> <route> <from uri=\"netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&sync=false\"/> <to uri=\"mock:multiple-codec\"/> </route> </camelContext>",
"from(\"netty:tcp://0.0.0.0:8080\").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody(\"Bye \" + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } });",
"public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; protected void initChannel(Channel ch) throws Exception { ChannelPipeline channelPipeline = ch.pipeline(); channelPipeline.addLast(\"encoder-SD\", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast(\"decoder-DELIM\", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast(\"decoder-SD\", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. channelPipeline.addLast(\"handler\", new ServerChannelHandler(consumer)); } }",
"Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(); registry.bind(\"spf\", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf\" String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });",
"<!-- use the worker pool builder to help create the shared thread pool --> <bean id=\"poolBuilder\" class=\"org.apache.camel.component.netty.NettyWorkerPoolBuilder\"> <property name=\"workerCount\" value=\"2\"/> </bean> <!-- the shared worker thread pool --> <bean id=\"sharedPool\" class=\"org.jboss.netty.channel.socket.nio.WorkerPool\" factory-bean=\"poolBuilder\" factory-method=\"build\" destroy-method=\"shutdown\"> </bean>",
"<route> <from uri=\"netty:tcp://0.0.0.0:5021?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>",
"<route> <from uri=\"netty:tcp://0.0.0.0:5022?textline=true&sync=true&workerPool=#sharedPool&usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-netty-component-starter
|
Appendix A. Using your subscription
|
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/using_your_subscription
|
Chapter 41. PodDisruptionBudgetTemplate schema reference
|
Chapter 41. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties A PodDisruptionBudget (PDB) is an OpenShift resource that ensures high availability by specifying the minimum number of pods that must be available during planned maintenance or upgrades. AMQ Streams creates a PDB for every new StrimziPodSet or Deployment . By default, the PDB allows only one pod to be unavailable at any given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property. StrimziPodSet custom resources manage pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is automatically converted to a minAvailable value when creating the PDB resource, which effectively serves the same purpose, as illustrated in the following examples: If there are three broker pods and the maxUnavailable property is set to 1 in the Kafka resource, the minAvailable setting is 2 , allowing one pod to be unavailable. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Example PodDisruptionBudget template configuration # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 41.1. PodDisruptionBudgetTemplate schema properties Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate maxUnavailable Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. integer
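For illustration, with three Kafka broker pods and maxUnavailable set to 1 in the template, the generated PDB would look roughly like the following sketch; the resource name and selector labels are assumptions, and the operator derives minAvailable: 2 automatically as described above. apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-cluster-kafka spec: minAvailable: 2 selector: matchLabels: strimzi.io/cluster: my-cluster strimzi.io/name: my-cluster-kafka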
|
[
"template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-PodDisruptionBudgetTemplate-reference
|
Chapter 21. File Systems
|
Chapter 21. File Systems Btrfs file system, see the section called "Support of Btrfs File System" OverlayFS, see the section called "OverlayFS"
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-tp-file_systems
|
Chapter 14. Enabling the Red Hat OpenShift Data Foundation console plugin
|
Chapter 14. Enabling the Red Hat OpenShift Data Foundation console plugin The Data Foundation console plugin is enabled by default. If this option was unchecked during OpenShift Data Foundation Operator installation, use the following instructions to enable the console plugin post-deployment, either from the graphical user interface (GUI) or the command-line interface. Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From user interface In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under the Console plugin . Select Enable , and click Save . From command-line interface Execute the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with the message Web console update is available appears in the GUI. Click Refresh web console in this pop-up so that the console changes take effect. In the Web Console, navigate to Storage and verify that Data Foundation is available.
|
[
"oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/enabling-the-red-hat-openshift-data-foundation-console-plugin-option_rhodf
|
Chapter 67. router
|
Chapter 67. router This chapter describes the commands under the router command. 67.1. router add port Add a port to a router Usage: Table 67.1. Positional arguments Value Summary <router> Router to which port will be added (name or id) <port> Port to be added (name or id) Table 67.2. Command arguments Value Summary -h, --help Show this help message and exit 67.2. router add subnet Add a subnet to a router Usage: Table 67.3. Positional arguments Value Summary <router> Router to which subnet will be added (name or id) <subnet> Subnet to be added (name or id) Table 67.4. Command arguments Value Summary -h, --help Show this help message and exit 67.3. router create Create a new router Usage: Table 67.5. Positional arguments Value Summary <name> New router name Table 67.6. Command arguments Value Summary -h, --help Show this help message and exit --enable Enable router (default) --disable Disable router --distributed Create a distributed router --centralized Create a centralized router --ha Create a highly available router --no-ha Create a legacy router --description <description> Set router description --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --availability-zone-hint <availability-zone> Availability zone in which to create this router (Router Availability Zone extension required, repeat option to set multiple availability zones) --tag <tag> Tag to be added to the router (repeat option to set multiple tags) --no-tag No tags associated with the router Table 67.7. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 67.8. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 67.9. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 67.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 67.4. router delete Delete router(s) Usage: Table 67.11. Positional arguments Value Summary <router> Router(s) to delete (name or id) Table 67.12. Command arguments Value Summary -h, --help Show this help message and exit 67.5. router list List routers Usage: Table 67.13. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List routers according to their name --enable List enabled routers --disable List disabled routers --long List additional fields in output --project <project> List routers according to their project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --agent <agent-id> List routers hosted by an agent (id only) --tags <tag>[,<tag>,... ] List routers which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List routers which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... 
] Exclude routers which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude routers which have any given tag(s) (comma- separated list of tags) Table 67.14. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 67.15. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 67.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 67.17. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 67.6. router remove port Remove a port from a router Usage: Table 67.18. Positional arguments Value Summary <router> Router from which port will be removed (name or id) <port> Port to be removed and deleted (name or id) Table 67.19. Command arguments Value Summary -h, --help Show this help message and exit 67.7. router remove subnet Remove a subnet from a router Usage: Table 67.20. Positional arguments Value Summary <router> Router from which the subnet will be removed (name or id) <subnet> Subnet to be removed (name or id) Table 67.21. Command arguments Value Summary -h, --help Show this help message and exit 67.8. router set Set router properties Usage: Table 67.22. Positional arguments Value Summary <router> Router to modify (name or id) Table 67.23. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set router name --description <description> Set router description --enable Enable router --disable Disable router --distributed Set router to distributed mode (disabled router only) --centralized Set router to centralized mode (disabled router only) --route destination=<subnet>,gateway=<ip-address> Routes associated with the router destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to set multiple routes) --no-route Clear routes associated with the router. specify both --route and --no-route to overwrite current value of route. --ha Set the router as highly available (disabled router only) --no-ha Clear high availability attribute of the router (disabled router only) --external-gateway <network> External network used as router's gateway (name or id) --fixed-ip subnet=<subnet>,ip-address=<ip-address> Desired ip and/or subnet (name or id) on external gateway: subnet=<subnet>,ip-address=<ip-address> (repeat option to set multiple fixed IP addresses) --enable-snat Enable source nat on external gateway --disable-snat Disable source nat on external gateway --qos-policy <qos-policy> Attach qos policy to router gateway ips --no-qos-policy Remove qos policy from router gateway ips --tag <tag> Tag to be added to the router (repeat option to set multiple tags) --no-tag Clear tags associated with the router. specify both --tag and --no-tag to overwrite current tags 67.9. 
router show Display router details Usage: Table 67.24. Positional arguments Value Summary <router> Router to display (name or id) Table 67.25. Command arguments Value Summary -h, --help Show this help message and exit Table 67.26. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 67.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 67.28. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 67.29. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 67.10. router unset Unset router properties Usage: Table 67.30. Positional arguments Value Summary <router> Router to modify (name or id) Table 67.31. Command arguments Value Summary -h, --help Show this help message and exit --route destination=<subnet>,gateway=<ip-address> Routes to be removed from the router destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to unset multiple routes) --external-gateway Remove external gateway information from the router --qos-policy Remove qos policy from router gateway ips --tag <tag> Tag to be removed from the router (repeat option to remove multiple tags) --all-tag Clear all tags associated with the router
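As a worked illustration of the subcommands above, the following sketch creates a router, attaches a subnet, sets an external gateway, and then cleans up; the names demo-router, demo-subnet, and public are placeholders for resources in your own environment:
openstack router create demo-router
openstack router add subnet demo-router demo-subnet
openstack router set --external-gateway public demo-router
openstack router show demo-router -f json
openstack router unset --external-gateway demo-router
openstack router delete demo-router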
|
[
"openstack router add port [-h] <router> <port>",
"openstack router add subnet [-h] <router> <subnet>",
"openstack router create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--enable | --disable] [--distributed | --centralized] [--ha | --no-ha] [--description <description>] [--project <project>] [--project-domain <project-domain>] [--availability-zone-hint <availability-zone>] [--tag <tag> | --no-tag] <name>",
"openstack router delete [-h] <router> [<router> ...]",
"openstack router list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name <name>] [--enable | --disable] [--long] [--project <project>] [--project-domain <project-domain>] [--agent <agent-id>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack router remove port [-h] <router> <port>",
"openstack router remove subnet [-h] <router> <subnet>",
"openstack router set [-h] [--name <name>] [--description <description>] [--enable | --disable] [--distributed | --centralized] [--route destination=<subnet>,gateway=<ip-address>] [--no-route] [--ha | --no-ha] [--external-gateway <network>] [--fixed-ip subnet=<subnet>,ip-address=<ip-address>] [--enable-snat | --disable-snat] [--qos-policy <qos-policy> | --no-qos-policy] [--tag <tag>] [--no-tag] <router>",
"openstack router show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <router>",
"openstack router unset [-h] [--route destination=<subnet>,gateway=<ip-address>] [--external-gateway] [--qos-policy] [--tag <tag> | --all-tag] <router>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/router
|
1.2. Red Hat Virtualization REST API Prerequisites
|
1.2. Red Hat Virtualization REST API Prerequisites A networked installation of Red Hat Virtualization Manager, which includes the REST API. A client or programming library that sends HTTP requests to, and receives responses from, the REST API. For example: Python software development kit (SDK) Java software development kit (SDK) cURL command line tool RESTClient, a debugger for RESTful web services Knowledge of Hypertext Transfer Protocol (HTTP), which is the protocol used for REST API interactions. The Internet Engineering Task Force provides a Request for Comments (RFC) explaining the Hypertext Transfer Protocol at http://www.ietf.org/rfc/rfc2616.txt . Knowledge of Extensible Markup Language (XML) or JavaScript Object Notation (JSON), which the API uses to construct resource representations. The W3C provides a full specification on XML at http://www.w3.org/TR/xml/ . ECMA International provides a free publication on JSON at http://www.ecma-international.org .
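As a simple connectivity test with the cURL command line tool, a request like the following retrieves the API entry point. Treat this as a hedged sketch: the Manager host name manager.example.com, the admin@internal user, and the /ovirt-engine/api entry point are assumptions to be adjusted for your environment, and the -k option (which skips certificate verification) is suitable for testing only:
curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' https://manager.example.com/ovirt-engine/api
Replace PASSWORD with the password of the account you use to access the REST API.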
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/prerequisites1
|
8.207. s390utils
|
8.207. s390utils 8.207.1. RHBA-2014:1546 - s390utils bug fix and enhancement update Updated s390utils packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The s390utils packages contain a set of user space utilities that should be used together with the zSeries (s390) Linux kernel and device drivers. Bug Fixes BZ# 1009897 Due to an incorrect order of initialization, Anaconda failed to detect the presence of zSeries Linux fibre-channel adapter (zFCP) disks. To fix this bug, the cio_settle kernel interface has been implemented, which waits for the zFCP devices to come online. Now, Anaconda detects the zFCP devices as intended. BZ# 1016181 For each possible CPU, the zfcpdump kernel consumed all memory for the per-CPU data structures. Because only 32 MB are available for zfcpdump, zfcpdump could run out of memory. This update adds a new kernel parameter, "possible_cpus=1", and the zfcpdump system no longer runs out of memory. BZ# 1020364 Previously, when the "fdasd -c" command was called with a configuration file that contained only one parameter for a partition, the fdasd utility terminated unexpectedly with a segmentation fault during configuration file parsing. This update adds a new function to parse configuration file lines, and fdasd no longer crashes in the described situation. BZ# 1094376 Previously, removing a device that was currently offline resulted in an error. The znetconf tool has been fixed to handle removal of offline ccwgroup devices correctly, and the "znetconf -r" command now removes a currently offline device as intended. BZ# 1107779 Previously, the output of the lsqeth command depended on the presence of a file named "?". Due to a bug in the grep command regular expression (regex), the system qeth devices failed to be detected when a file named "?" was present in the current working directory. To fix this bug, the regex argument of grep has been put in single quotes, and the system qeth devices are now detected successfully. BZ# 1109898 Due to incomplete dependencies specified in the s390utils packages, various command-line tools did not work when invoked or gathered incomplete data. This update adds the missing dependencies to the tools, which now work as expected. In addition, this update adds the following enhancements: BZ# 1017854 , BZ# 1031143 , BZ# 1032061 , BZ# 1088328 With this update, various information gathered by the dbginfo.sh utility has been expanded, along with the relevant manual page. BZ# 1053832 This update introduces a new interface that enables Linux applications such as Data Stage to access and process read-only data in physical sequential data sets owned by IBM System z without interfering with System z. By avoiding FTP or NFS transfer of data from System z, the turnaround time for batch processing is significantly reduced. Users of s390utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
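For context, the znetconf fix above concerns invocations of the following form; this is only an illustrative sketch, and the device bus ID 0.0.f500 is a placeholder:
znetconf -c           # list configured network devices
znetconf -r 0.0.f500  # remove the device, even if it is currently offline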
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/s390utils
|
Chapter 6. Memory Tapset
|
Chapter 6. Memory Tapset This family of probe points is used to probe memory-related events or query the memory usage of the current process. It contains the following probe points:
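As an illustrative sketch of how a probe from this family can be used, the following one-liner reports page faults per process; the vm.pagefault probe name and its address variable are assumptions that depend on the tapset shipped with your SystemTap version:
stap -e 'probe vm.pagefault { printf("%s (pid %d) faulted at %p\n", execname(), pid(), address) }'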
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/memory_stp
|
Chapter 3. Red Hat Ceph Storage installation
|
Chapter 3. Red Hat Ceph Storage installation As a storage administrator, you can use the cephadm utility to deploy new Red Hat Ceph Storage clusters. The cephadm utility manages the entire life cycle of a Ceph cluster. Installation and management tasks comprise two types of operations: Day One operations involve installing and bootstrapping a bare-minimum, containerized Ceph storage cluster, running on a single node. Day One also includes deploying the Monitor and Manager daemons and adding Ceph OSDs. Day Two operations use the Ceph orchestration interface, cephadm orch , or the Red Hat Ceph Storage Dashboard to expand the storage cluster by adding other Ceph services to the storage cluster. Prerequisites At least one running virtual machine (VM) or bare-metal server with an active internet connection. Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream. A valid Red Hat subscription with the appropriate entitlements. Root-level access to all nodes. An active Red Hat Network (RHN) or service account to access the Red Hat Registry. Remove troubling configurations in iptables so that refresh of iptables services does not cause issues to the cluster. For an example, refer to the Verifying firewall rules are configured for default Ceph ports section of the Red Hat Ceph Storage Configuration Guide . For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide . 3.1. The cephadm utility The cephadm utility deploys and manages a Ceph storage cluster. It is tightly integrated with both the command-line interface (CLI) and the Red Hat Ceph Storage Dashboard web interface, so that you can manage storage clusters from either environment. cephadm uses SSH to connect to hosts from the manager daemon to add, remove, or update Ceph daemon containers. It does not rely on external configuration or orchestration tools such as Ansible or Rook. Note The cephadm utility is available after running the preflight playbook on a host. The cephadm utility consists of two main components: The cephadm shell. The cephadm orchestrator. The cephadm shell The cephadm shell launches a bash shell within a container. This enables you to perform "Day One" cluster setup tasks, such as installation and bootstrapping, and to invoke ceph commands. There are two ways to invoke the cephadm shell: Enter cephadm shell at the system prompt: Example At the system prompt, type cephadm shell and the command you want to execute: Example Note If the node contains configuration and keyring files in /etc/ceph/ , the container environment uses the values in those files as defaults for the cephadm shell. However, if you execute the cephadm shell on a Ceph Monitor node, the cephadm shell inherits its default configuration from the Ceph Monitor container, instead of using the default configuration. The cephadm orchestrator The cephadm orchestrator enables you to perform "Day Two" Ceph functions, such as expanding the storage cluster and provisioning Ceph daemons and services. You can use the cephadm orchestrator through either the command-line interface (CLI) or the web-based Red Hat Ceph Storage Dashboard. Orchestrator commands take the form ceph orch . The cephadm script interacts with the Ceph orchestration module used by the Ceph Manager. 3.2. How cephadm works The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster. The cephadm command can perform the following operations: Bootstrap a new Red Hat Ceph Storage cluster. 
Launch a containerized shell that works with the Red Hat Ceph Storage command-line interface (CLI). Aid in debugging containerized daemons. The cephadm command uses ssh to communicate with the nodes in the storage cluster. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools. Generate the ssh key pair during the bootstrapping process, or use your own ssh key. The cephadm bootstrapping process creates a small storage cluster on a single node, consisting of one Ceph Monitor and one Ceph Manager, as well as any required dependencies. You then use the orchestrator CLI or the Red Hat Ceph Storage Dashboard to expand the storage cluster to include nodes, and to provision all of the Red Hat Ceph Storage daemons and services. You can perform management functions through the CLI or from the Red Hat Ceph Storage Dashboard web interface. 3.3. The cephadm-ansible playbooks The cephadm-ansible package is a collection of Ansible playbooks to simplify workflows that are not covered by cephadm . After installation, the playbooks are located in /usr/share/cephadm-ansible/ . The cephadm-ansible package includes the following playbooks: cephadm-preflight.yml cephadm-clients.yml cephadm-purge-cluster.yml The cephadm-preflight playbook Use the cephadm-preflight playbook to initially setup hosts before bootstrapping the storage cluster and before adding new nodes or clients to your storage cluster. This playbook configures the Ceph repository and installs some prerequisites such as podman , lvm2 , chrony , and cephadm . The cephadm-clients playbook Use the cephadm-clients playbook to set up client hosts. This playbook handles the distribution of configuration and keyring files to a group of Ceph clients. The cephadm-purge-cluster playbook Use the cephadm-purge-cluster playbook to remove a Ceph cluster. This playbook purges a Ceph cluster managed with cephadm. Additional Resources For more information about the cephadm-preflight playbook, see Running the preflight playbook . For more information about the cephadm-clients playbook, see Running the cephadm-clients playbook . For more information about the cephadm-purge-cluster playbook, see Purging the Ceph storage cluster . 3.4. Registering the Red Hat Ceph Storage nodes to the CDN and attaching subscriptions Important When using Red Hat Enterprise Linux 8.x, the Admin node must be running a supported Red Hat Enterprise Linux 9.x version for your Red Hat Ceph Storage. For full compatibility information, see Compatibility Guide . Prerequisites At least one running virtual machine (VM) or bare-metal server with an active internet connection. Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream. A valid Red Hat subscription with the appropriate entitlements. Root-level access to all nodes. Procedure Register the node, and when prompted, enter your Red Hat Customer Portal credentials: Syntax Pull the latest subscription data from the CDN: Syntax List all available subscriptions for Red Hat Ceph Storage: Syntax Identify the appropriate subscription and retrieve its Pool ID. Attach a pool ID to gain access to the software entitlements. Use the Pool ID you identified in the step. Syntax Disable the default software repositories, and then enable the server and the extras repositories on the respective version of Red Hat Enterprise Linux: Red Hat Enterprise Linux 9 Update the system to receive the latest packages for Red Hat Enterprise Linux: Syntax Subscribe to Red Hat Ceph Storage 6 content. 
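For nodes registered directly to the CDN, the registration and repository steps described above can be condensed into a sketch like the following; POOL_ID is a placeholder for the Pool ID returned for your subscription, and the repository names assume Red Hat Enterprise Linux 9 on x86_64:
subscription-manager register
subscription-manager refresh
subscription-manager list --available --matches 'Red Hat Ceph Storage'
subscription-manager attach --pool=POOL_ID
subscription-manager repos --disable='*' --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms
dnf update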
Follow the instructions in How to Register Ceph with Red Hat Satellite 6 . Enable the ceph-tools repository: Red Hat Enterprise Linux 9 Repeat the above steps on all nodes you are adding to the cluster. Install cephadm-ansible : Syntax 3.5. Configuring Ansible inventory location You can configure inventory location files for the cephadm-ansible staging and production environments. The Ansible inventory hosts file contains all the hosts that are part of the storage cluster. You can list nodes individually in the inventory hosts file or you can create groups such as [mons] , [osds] , and [rgws] to provide clarity to your inventory and ease the usage of the --limit option to target a group or node when running a playbook. Note If deploying clients, client nodes must be defined in a dedicated [clients] group. Prerequisites An Ansible administration node. Root-level access to the Ansible administration node. The cephadm-ansible package is installed on the node. Procedure Navigate to the /usr/share/cephadm-ansible/ directory: Optional: Create subdirectories for staging and production: Optional: Edit the ansible.cfg file and add the following line to assign a default inventory location: Optional: Create an inventory hosts file for each environment: Open and edit each hosts file and add the nodes and [admin] group: Replace NODE_NAME_1 and NODE_NAME_2 with the Ceph nodes such as monitors, OSDs, MDSs, and gateway nodes. Replace ADMIN_NODE_NAME_1 with the name of the node where the admin keyring is stored. Example Note If you set the inventory location in the ansible.cfg file to staging, you need to run the playbooks in the staging environment as follows: Syntax To run the playbooks in the production environment: Syntax 3.6. Enabling SSH login as root user on Red Hat Enterprise Linux 9 Red Hat Enterprise Linux 9 does not support SSH login as a root user even if PermitRootLogin parameter is set to yes in the /etc/ssh/sshd_config file. You get the following error: Example You can run one of the following methods to enable login as a root user: Use "Allow root SSH login with password" flag while setting the root password during installation of Red Hat Enterprise Linux 9. Manually set the PermitRootLogin parameter after Red Hat Enterprise Linux 9 installation. This section describes manual setting of the PermitRootLogin parameter. Prerequisites Root-level access to all nodes. Procedure Open the etc/ssh/sshd_config file and set the PermitRootLogin to yes : Example Restart the SSH service: Example Login to the node as the root user: Syntax Replace HOST_NAME with the host name of the Ceph node. Example Enter the root password when prompted. Additional Resources For more information, see the Not able to login as root user via ssh in RHEL 9 server Knowledgebase solution. 3.7. Creating an Ansible user with sudo access You can create an Ansible user with password-less root access on all nodes in the storage cluster to run the cephadm-ansible playbooks. The Ansible user must be able to log into all the Red Hat Ceph Storage nodes as a user that has root privileges to install software and create configuration files without prompting for a password. Prerequisites Root-level access to all nodes. For Red Hat Enterprise Linux 9, to log in as a root user, see Enabling SSH log in as root user on Red Hat Enterprise Linux 9 Procedure Log in to the node as the root user: Syntax Replace HOST_NAME with the host name of the Ceph node. Example Enter the root password when prompted. 
Create a new Ansible user: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Important Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks. Set a new password for this user: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Enter the new password twice when prompted. Configure sudo access for the newly created user: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Assign the correct file permissions to the new file: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Repeat the above steps on all nodes in the storage cluster. Additional Resources For more information about creating user accounts, see the Getting started with managing user accounts section in the Configuring basic system settings chapter of the Red Hat Enterprise Linux 9 guide. 3.8. Configuring SSH As a storage administrator, with Cephadm, you can use an SSH key to securely authenticate with remote hosts. The SSH key is stored in the monitor to connect to remote hosts. Prerequisites An Ansible administration node. Root-level access to the Ansible administration node. The cephadm-ansible package is installed on the node. Procedure Navigate to the cephadm-ansible directory. Generate a new SSH key: Example Retrieve the public portion of the SSH key: Example Delete the currently stored SSH key: Example Restart the mgr daemon to reload the configuration: Example 3.8.1. Configuring a different SSH user As a storage administrator, you can configure a non-root SSH user who can log into all the Ceph cluster nodes with enough privileges to download container images, start containers, and execute commands without prompting for a password. Important Prior to configuring a non-root SSH user, the cluster SSH key needs to be added to the user's authorized_keys file and non-root users must have passwordless sudo access. Prerequisites A running Red Hat Ceph Storage cluster. An Ansible administration node. Root-level access to the Ansible administration node. The cephadm-ansible package is installed on the node. Add the cluster SSH keys to the user's authorized_keys . Enable passwordless sudo access for the non-root users. Procedure Navigate to the cephadm-ansible directory. Provide Cephadm the name of the user who is going to perform all the Cephadm operations: Syntax Example Retrieve the SSH public key. Syntax Example Copy the SSH keys to all the hosts. Syntax Example 3.9. Enabling password-less SSH for Ansible Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password. Prerequisites Access to the Ansible administration node. Ansible user with sudo access to all nodes in the storage cluster. For Red Hat Enterprise Linux 9, to log in as a root user, see Enabling SSH log in as root user on Red Hat Enterprise Linux 9 Procedure Generate the SSH key pair, accept the default file name and leave the passphrase empty: Copy the public key to all nodes in the storage cluster: Replace USER_NAME with the new user name for the Ansible user. Replace HOST_NAME with the host name of the Ceph node. Example Create the user's SSH config file: Open for editing the config file. 
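A minimal sketch of the key generation and distribution steps just described, assuming an Ansible user named ceph-admin and storage cluster hosts host01 and host02 (all three names are placeholders):
ssh-keygen                      # accept the default file name and leave the passphrase empty
ssh-copy-id ceph-admin@host01
ssh-copy-id ceph-admin@host02
touch ~/.ssh/config             # the config file edited in the next step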
Set values for the Hostname and User options for each node in the storage cluster: Replace HOST_NAME with the host name of the Ceph node. Replace USER_NAME with the new user name for the Ansible user. Example Important By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command. Set the correct file permissions for the ~/.ssh/config file: Additional Resources The ssh_config(5) manual page. See Using secure communications between two systems with OpenSSH . 3.10. Running the preflight playbook This Ansible playbook configures the Ceph repository and prepares the storage cluster for bootstrapping. It also installs some prerequisites, such as podman , lvm2 , chrony , and cephadm . The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . The preflight playbook uses the cephadm-ansible inventory file to identify all the admin and nodes in the storage cluster. The default location for the inventory file is /usr/share/cephadm-ansible/hosts . The following example shows the structure of a typical inventory file: Example The [admin] group in the inventory file contains the name of the node where the admin keyring is stored. On a new storage cluster, the node in the [admin] group will be the bootstrap node. To add additional admin hosts after bootstrapping the cluster see Setting up the admin node in the Installation Guide for more information. Note Run the preflight playbook before you bootstrap the initial host. Important If you are performing a disconnected installation, see Running the preflight playbook for a disconnected installation . Prerequisites Root-level access to the Ansible administration node. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Note In the below example, host01 is the bootstrap node. Procedure Navigate to the the /usr/share/cephadm-ansible directory. Open and edit the hosts file and add your nodes: Example Run the preflight playbook: Syntax Example After installation is complete, cephadm resides in the /usr/sbin/ directory. Use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster: Syntax Replace GROUP_NAME with a group name from your inventory file. Replace NODE_NAME with a specific node name from your inventory file. Note Optionally, you can group your nodes in your inventory file by group name such as [mons] , [osds] , and [mgrs] . However, admin nodes must be added to the [admin] group and clients must be added to the [clients] group. Example When you run the preflight playbook, cephadm-ansible automatically installs chrony and ceph-common on the client nodes. The preflight playbook installs chrony but configures it for a single NTP source. If you want to configure multiple sources or if you have a disconnected environment, see the following documentation for more information: How to configure chrony? Best practices for NTP . Basic chrony NTP troubleshooting . 3.11. Bootstrapping a new storage cluster The cephadm utility performs the following tasks during the bootstrap process: Installs and starts a Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node as containers. Creates the /etc/ceph directory. Writes a copy of the public key to /etc/ceph/ceph.pub for the Red Hat Ceph Storage cluster and adds the SSH key to the root user's /root/.ssh/authorized_keys file. Applies the _admin label to the bootstrap node. 
Writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf . Writes a copy of the client.admin administrative secret key to /etc/ceph/ceph.client.admin.keyring . Deploys a basic monitoring stack with prometheus, grafana, and other tools such as node-exporter and alert-manager . Important If you are performing a disconnected installation, see Performing a disconnected installation . Note If you have existing prometheus services that you want to run with the new storage cluster, or if you are running Ceph with Rook, use the --skip-monitoring-stack option with the cephadm bootstrap command. This option bypasses the basic monitoring stack so that you can manually configure it later. Important If you are deploying a monitoring stack, see Deploying the monitoring stack using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide . Important Bootstrapping provides the default user name and password for the initial login to the Dashboard. Bootstrap requires you to change the password after you log in. Important Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm . If the two versions do not match, bootstrapping fails at the Creating initial admin user stage. Note Before you begin the bootstrapping process, you must create a username and password for the registry.redhat.io container registry. For more information about Red Hat container registry authentication, see the knowledge base article Red Hat Container Registry Authentication Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Root-level access to all nodes. Note If the storage cluster includes multiple networks and interfaces, be sure to choose a network that is accessible by any node that uses the storage cluster. Note If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. Important Run cephadm bootstrap on the node that you want to be the initial Monitor node in the cluster. The IP_ADDRESS option should be the IP address of the node you are using to run cephadm bootstrap . Note If you want to deploy a storage cluster using IPV6 addresses, then use the IPV6 address format for the --mon-ip IP_ADDRESS option. For example: cephadm bootstrap --mon-ip 2620:52:0:880:225:90ff:fefc:2536 --registry-json /etc/mylogin.json Procedure Bootstrap a storage cluster: Syntax Example Note If you want internal cluster traffic routed over the public network, you can omit the --cluster-network NETWORK_CIDR option. The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry. Additional Resources For more information about the recommended bootstrap command options, see Recommended cephadm bootstrap command options . For more information about the options available for the bootstrap command, see Bootstrap command options . For information about using a JSON file to contain login credentials for the bootstrap process, see Using a JSON file to protect login information . 3.11.1. 
Recommended cephadm bootstrap command options The cephadm bootstrap command has multiple options that allow you to specify file locations, configure ssh settings, set passwords, and perform other initial configuration tasks. Red Hat recommends that you use a basic set of command options for cephadm bootstrap . You can configure additional options after your initial cluster is up and running. The following examples show how to specify the recommended options. Syntax Example Additional Resources For more information about the --registry-json option, see Using a JSON file to protect login information . For more information about all available cephadm bootstrap options, see Bootstrap command options . For more information about bootstrapping the storage cluster as a non-root user, see Bootstrapping the storage cluster as a non-root user . 3.11.2. Using a JSON file to protect login information As a storage administrator, you might choose to add login and password information to a JSON file, and then refer to the JSON file for bootstrapping. This protects the login credentials from exposure. Note You can also use a JSON file with the cephadm --registry-login command. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Root-level access to all nodes. Procedure Create the JSON file. In this example, the file is named mylogin.json . Syntax Example Bootstrap a storage cluster: Syntax Example 3.11.3. Bootstrapping a storage cluster using a service configuration file To bootstrap the storage cluster and configure additional hosts and daemons using a service configuration file, use the --apply-spec option with the cephadm bootstrap command. The configuration file is a .yaml file that contains the service type, placement, and designated nodes for services that you want to deploy. Note If you want to use a non-default realm or zone for applications such as multi-site, configure your Ceph Object Gateway daemons after you bootstrap the storage cluster, instead of adding them to the configuration file and using the --apply-spec option. This gives you the opportunity to create the realm or zone you need for the Ceph Object Gateway daemons before deploying them. See the Red Hat Ceph Storage Operations Guide for more information. Note If deploying a NFS-Ganesha gateway, or Metadata Server (MDS) service, configure them after bootstrapping the storage cluster. To deploy a Ceph NFS-Ganesha gateway, you must create a RADOS pool first. To deploy the MDS service, you must create a CephFS volume first. See the Red Hat Ceph Storage Operations Guide for more information. Note With Red Hat Ceph Storage 6.0, if you run the bootstrap command with --apply-spec option, ensure to include the IP address of the bootstrap host in the specification file. This prevents resolving the IP address to loopback address while re-adding the bootstrap host where active Ceph Manager is already running. If you do not use the --apply spec option during bootstrap and instead use ceph orch apply command with another specification file which includes re-adding the host and contains an active Ceph Manager running, then ensure to explicitly provide the addr field. This is applicable for applying any specification file after bootstrapping. Prerequisites At least one running virtual machine (VM) or server. 
Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream. Root-level access to all nodes. Login access to registry.redhat.io . Passwordless ssh is set up on all hosts in the storage cluster. cephadm is installed on the node that you want to be the initial Monitor node in the storage cluster. Note For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide . Procedure Log in to the bootstrap host. Create the service configuration .yaml file for your storage cluster. The example file directs cephadm bootstrap to configure the initial host and two additional hosts, and it specifies that OSDs be created on all available disks. Example Bootstrap the storage cluster with the --apply-spec option: Syntax Example The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry. Once your storage cluster is up and running, see the Red Hat Ceph Storage Operations Guide for more information about configuring additional daemons and services. Additional Resources For more information about the options available for the bootstrap command, see the Bootstrap command options . 3.11.4. Bootstrapping the storage cluster as a non-root user To bootstrap the Red Hat Ceph Storage cluster as a non-root user on the bootstrap node, use the --ssh-user option with the cephadm bootstrap command. --ssh-user specifies a user for SSH connections to cluster nodes. Non-root users must have passwordless sudo access. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the initial Monitor node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . SSH public and private keys. Passwordless sudo access to the bootstrap node. Procedure Change to sudo on the bootstrap node: Syntax Example Establish the SSH connection to the bootstrap node: Example Optional: Invoke the cephadm bootstrap command. Note Using private and public keys is optional. If SSH keys have not previously been created, these can be created during this step. Include the --ssh-private-key and --ssh-public-key options: Syntax Example Additional Resources For more information about all available cephadm bootstrap options, see Bootstrap command options . For more information about utilizing Ansible to automate bootstrapping a rootless cluster, see the knowledge base article Red Hat Ceph Storage 6 rootless deployment utilizing ansible ad-hoc commands . 3.11.5. Bootstrap command options The cephadm bootstrap command bootstraps a Ceph storage cluster on the local host. It deploys a MON daemon and a MGR daemon on the bootstrap node, automatically deploys the monitoring stack on the local host, and calls ceph orch host add HOSTNAME . The following table lists the available options for cephadm bootstrap . cephadm bootstrap option Description --config CONFIG_FILE , -c CONFIG_FILE CONFIG_FILE is the ceph.conf file to use with the bootstrap command --cluster-network NETWORK_CIDR Use the subnet defined by NETWORK_CIDR for internal cluster traffic. This is specified in CIDR notation. For example: 10.10.128.0/24 . --mon-id MON_ID Bootstraps on the host named MON_ID . Default value is the local host. 
--mon-addrv MON_ADDRV mon IPs (e.g., [v2:localipaddr:3300,v1:localipaddr:6789]) --mon-ip IP_ADDRESS IP address of the node you are using to run cephadm bootstrap . --mgr-id MGR_ID Host ID where a MGR node should be installed. Default: randomly generated. --fsid FSID Cluster FSID. --output-dir OUTPUT_DIR Use this directory to write config, keyring, and pub key files. --output-keyring OUTPUT_KEYRING Use this location to write the keyring file with the new cluster admin and mon keys. --output-config OUTPUT_CONFIG Use this location to write the configuration file to connect to the new cluster. --output-pub-ssh-key OUTPUT_PUB_SSH_KEY Use this location to write the public SSH key for the cluster. --skip-ssh Skip the setup of the ssh key on the local host. --initial-dashboard-user INITIAL_DASHBOARD_USER Initial user for the dashboard. --initial-dashboard-password INITIAL_DASHBOARD_PASSWORD Initial password for the initial dashboard user. --ssl-dashboard-port SSL_DASHBOARD_PORT Port number used to connect with the dashboard using SSL. --dashboard-key DASHBOARD_KEY Dashboard key. --dashboard-crt DASHBOARD_CRT Dashboard certificate. --ssh-config SSH_CONFIG SSH config. --ssh-private-key SSH_PRIVATE_KEY SSH private key. --ssh-public-key SSH_PUBLIC_KEY SSH public key. --ssh-user SSH_USER Sets the user for SSH connections to cluster hosts. Passwordless sudo is needed for non-root users. --skip-mon-network Sets mon public_network based on the bootstrap mon ip. --skip-dashboard Do not enable the Ceph Dashboard. --dashboard-password-noupdate Disable forced dashboard password change. --no-minimize-config Do not assimilate and minimize the configuration file. --skip-ping-check Do not verify that the mon IP is pingable. --skip-pull Do not pull the latest image before bootstrapping. --skip-firewalld Do not configure firewalld. --allow-overwrite Allow the overwrite of existing -output-* config/keyring/ssh files. --allow-fqdn-hostname Allow fully qualified host name. --skip-prepare-host Do not prepare host. --orphan-initial-daemons Do not create initial mon, mgr, and crash service specs. --skip-monitoring-stack Do not automatically provision the monitoring stack] (prometheus, grafana, alertmanager, node-exporter). --apply-spec APPLY_SPEC Apply cluster spec file after bootstrap (copy ssh key, add hosts and apply services). --registry-url REGISTRY_URL Specifies the URL of the custom registry to log into. For example: registry.redhat.io . --registry-username REGISTRY_USERNAME User name of the login account to the custom registry. --registry-password REGISTRY_PASSWORD Password of the login account to the custom registry. --registry-json REGISTRY_JSON JSON file containing registry login information. Additional Resources For more information about the --skip-monitoring-stack option, see Adding hosts . For more information about logging into the registry with the registry-json option, see help for the registry-login command. For more information about cephadm options, see help for cephadm . 3.11.6. Configuring a private registry for a disconnected installation You can use a disconnected installation procedure to install cephadm and bootstrap your storage cluster on a private network. A disconnected installation uses a private registry for installation. Use this procedure when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment. Follow this procedure to set up a secure private registry by using authentication and a self-signed certificate. 
Perform these steps on a node that has both Internet access and access to the local cluster. Note Using an insecure registry for production is not recommended. Prerequisites At least one running virtual machine (VM) or server with an active internet connection. Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream. Login access to registry.redhat.io . Root-level access to all nodes. Note For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide . Procedure Log in to the node that has access to both the public network and the cluster nodes. Register the node, and when prompted, enter the appropriate Red Hat Customer Portal credentials: Example Pull the latest subscription data: Example List all available subscriptions for Red Hat Ceph Storage: Example Copy the Pool ID from the list of available subscriptions for Red Hat Ceph Storage. Attach the subscription to get access to the software entitlements: Syntax Replace POOL_ID with the Pool ID identified in the step. Disable the default software repositories, and enable the server and the extras repositories: Red Hat Enterprise Linux 9 Install the podman and httpd-tools packages: Example Create folders for the private registry: Example The registry will be stored in /opt/registry and the directories are mounted in the container that is running the registry. The auth directory stores the htpasswd file that the registry uses for authentication. The certs directory stores the certificates that the registry uses for authentication. The data directory stores the registry images. Create credentials for accessing the private registry: Syntax The b option provides the password from the command line. The B option stores the password using Bcrypt encryption. The c option creates the htpasswd file. Replace PRIVATE_REGISTRY_USERNAME with the username to create for the private registry. Replace PRIVATE_REGISTRY_PASSWORD with the password to create for the private registry username. Example Create a self-signed certificate: Syntax Replace LOCAL_NODE_FQDN with the fully qualified hostname of the private registry node. Note You will be prompted for the respective options for your certificate. The CN= value is the host name of your node and should be resolvable by DNS or the /etc/hosts file. Example Note When creating a self-signed certificate, be sure to create a certificate with a proper Subject Alternative Name (SAN). Podman commands that require TLS verification for certificates that do not include a proper SAN, return the following error: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0 Create a symbolic link to domain.cert to allow skopeo to locate the certificate with the file extension .cert : Example Add the certificate to the trusted list on the private registry node: Syntax Replace LOCAL_NODE_FQDN with the FQDN of the private registry node. Example Copy the certificate to any nodes that will access the private registry for installation and update the trusted list: Example Download and install the mirror registry. Download the mirror-registory from the Red Hat Hybrid Cloud Console . Install the mirror registry. Syntax On the local registry node, verify that registry.redhat.io is in the container registry search path. 
Open for editing the /etc/containers/registries.conf file, and add registry.redhat.io to the unqualified-search-registries list, if it does not exist: Example Login to registry.redhat.io with your Red Hat Customer Portal credentials: Syntax Copy the following Red Hat Ceph Storage 6 image, Prometheus images, and Dashboard image from the Red Hat Customer Portal to the private registry: Table 3.1. Custom image details for monitoring stack Monitoring stack component Image details Prometheus registry.redhat.io/openshift4/ose-prometheus:v4.12 Grafana registry.redhat.io/rhceph/rhceph-6-dashboard-rhel9:latest Node-exporter registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 AlertManager registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 HAProxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest Keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest SNMP Gateway registry.redhat.io/rhceph/snmp-notifier-rhel9:latest Syntax Replace CERTIFICATE_DIRECTORY_PATH with the directory path to the self-signed certificates. Replace RED_HAT_CUSTOMER_PORTAL_LOGIN and RED_HAT_CUSTOMER_PORTAL_PASSWORD with your Red Hat Customer Portal credentials. Replace PRIVATE_REGISTRY_USERNAME and PRIVATE_REGISTRY_PASSWORD with the private registry credentials. Replace SRC_IMAGE and SRC_TAG with the name and tag of the image to copy from registry.redhat.io. Replace DST_IMAGE and DST_TAG with the name and tag of the image to copy to the private registry. Replace LOCAL_NODE_FQDN with the FQDN of the private registry. Example Using the Ceph Dashboard, verify that the images are in the local registry. For more information, see Monitoring services of the Ceph cluster on the dashboard in the Red Hat Ceph Storage Dashboard guide . Additional Resources For more information on different image Ceph package versions, see the knowledge base solution for details on What are the Red Hat Ceph Storage releases and corresponding Ceph package versions? 3.11.7. Running the preflight playbook for a disconnected installation You use the cephadm-preflight.yml Ansible playbook to configure the Ceph repository and prepare the storage cluster for bootstrapping. It also installs some prerequisites, such as podman , lvm2 , chrony , and cephadm . The preflight playbook uses the cephadm-ansible inventory hosts file to identify all the nodes in the storage cluster. The default location for cephadm-ansible , cephadm-preflight.yml , and the inventory hosts file is /usr/share/cephadm-ansible/ . The following example shows the structure of a typical inventory file: Example The [admin] group in the inventory file contains the name of the node where the admin keyring is stored. Note Run the preflight playbook before you bootstrap the initial host. Prerequisites The cephadm-ansible package is installed on the Ansible administration node. Root-level access to all nodes in the storage cluster. Passwordless ssh is set up on all hosts in the storage cluster. Nodes configured to access a local YUM repository server with the following repositories enabled: rhel-9-for-x86_64-baseos-rpms rhel-9-for-x86_64-appstream-rpms rhceph-6-tools-for-rhel-9-x86_64-rpms Important When using Red Hat Enterprise Linux 8.x, the Admin node must be running a supported Red Hat Enterprise Linux 9.x version for your Red Hat Ceph Storage. For the latest supported Red Hat Enterprise Linux versions, see the Red Hat Ceph Storage Compatibility Guide . 
Note For more information about setting up a local YUM repository, see the knowledge base article Creating a Local Repository and Sharing with Disconnected/Offline/Air-gapped Systems Procedure Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node. Open and edit the hosts file and add your nodes. Run the preflight playbook with the ceph_origin parameter set to custom to use a local YUM repository: Syntax Example After installation is complete, cephadm resides in the /usr/sbin/ directory. Note Populate the contents of the registries.conf file with the Ansible playbook: Syntax Example Alternatively, you can use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster: Syntax Replace GROUP_NAME with a group name from your inventory file. Replace NODE_NAME with a specific node name from your inventory file. Example Note When you run the preflight playbook, cephadm-ansible automatically installs chrony and ceph-common on the client nodes. 3.11.8. Performing a disconnected installation Before you can perform the installation, you must obtain a Red Hat Ceph Storage container image, either from a proxy host that has access to the Red Hat registry or by copying the image to your local registry. Note If your local registry uses a self-signed certificate with a local registry, ensure you have added the trusted root certificate to the bootstrap host. For more information, see Configuring a private registry for a disconnected installation . For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide . Important Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm . If the two versions do not match, bootstrapping fails at the Creating initial admin user stage. Prerequisites At least one running virtual machine (VM) or server. Root-level access to all nodes. Passwordless ssh is set up on all hosts in the storage cluster. The preflight playbook has been run on the bootstrap host in the storage cluster. For more information, see Running the preflight playbook for a disconnected installation . A private registry has been configured and the bootstrap node has access to it. For more information, see Configuring a private registry for a disconnected installation . A Red Hat Ceph Storage container image resides in the custom registry. Procedure Log in to the bootstrap host. Bootstrap the storage cluster: Syntax Replace PRIVATE_REGISTRY_NODE_FQDN with the fully qualified domain name of your private registry. Replace CUSTOM_IMAGE_NAME and IMAGE_TAG with the name and tag of the Red Hat Ceph Storage container image that resides in the private registry. Replace IP_ADDRESS with the IP address of the node you are using to run cephadm bootstrap . Replace PRIVATE_REGISTRY_USERNAME with the username to create for the private registry. Replace PRIVATE_REGISTRY_PASSWORD with the password to create for the private registry username. Example The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry. After the bootstrap process is complete, see Changing configurations of custom container images for disconnected installations to configure the container images. 
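Putting the placeholders described above together, the bootstrap invocation for a disconnected installation follows this general shape; treat it as a sketch and substitute the values for your private registry, image, and Monitor IP address:
cephadm --image PRIVATE_REGISTRY_NODE_FQDN/CUSTOM_IMAGE_NAME:IMAGE_TAG bootstrap \
  --mon-ip IP_ADDRESS \
  --registry-url PRIVATE_REGISTRY_NODE_FQDN \
  --registry-username PRIVATE_REGISTRY_USERNAME \
  --registry-password PRIVATE_REGISTRY_PASSWORD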
Additional Resources Once your storage cluster is up and running, see the Red Hat Ceph Storage Operations Guide for more information about configuring additional daemons and services. 3.11.9. Changing configurations of custom container images for disconnected installations After you perform the initial bootstrap for disconnected nodes, you must specify custom container images for monitoring stack daemons. You can override the default container images for monitoring stack daemons, since the nodes do not have access to the default container registry. Note Make sure that the bootstrap process on the initial host is complete before making any configuration changes. By default, the monitoring stack components are deployed based on the primary Ceph image. For disconnected environment of the storage cluster, you can use the latest available monitoring stack component images. Note When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons. Syntax Example Prerequisites At least one running virtual machine (VM) or server. Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream. Root-level access to all nodes. Passwordless ssh is set up on all hosts in the storage cluster. Note For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide . Procedure Set the custom container images with the ceph config command: Syntax Use the following options for OPTION_NAME : Example Redeploy node-exporter : Syntax Note If any of the services do not deploy, you can redeploy them with the ceph orch redeploy command. Note By setting a custom image, the default values for the configuration image name and tag will be overridden, but not overwritten. The default values change when updates become available. By setting a custom image, you will not be able to configure the component for which you have set the custom image for automatic updates. You will need to manually update the configuration image name and tag to be able to install updates. If you choose to revert to using the default configuration, you can reset the custom container image. Use ceph config rm to reset the configuration option: Syntax Example Additional Resources For more information about performing a disconnected installation, see Performing a disconnected installation . 3.12. Distributing SSH keys You can use the cephadm-distribute-ssh-key.yml playbook to distribute the SSH keys instead of creating and distributing the keys manually. The playbook distributes an SSH public key over all hosts in the inventory. You can also generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password. Prerequisites Ansible is installed on the administration node. Access to the Ansible administration node. Ansible user with sudo access to all nodes in the storage cluster. Bootstrapping is completed. See the Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide . Procedure Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example From the Ansible administration node, distribute the SSH keys. The optional cephadm_pubkey_path parameter is the full path name of the SSH public key file on the ansible controller host. 
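A typical invocation of the key-distribution playbook, assuming ceph-admin as the SSH user and host01 as the admin node (both illustrative), is sketched below:

ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01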
Note If cephadm_pubkey_path is not specified, the playbook gets the key from the cephadm get-pub-key command. This implies that you have at least bootstrapped a minimal cluster. Syntax Example 3.13. Launching the cephadm shell The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. This enables you to perform "Day One" cluster setup tasks, such as installation and bootstrapping, and to invoke ceph commands. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure There are two ways to launch the cephadm shell: Enter cephadm shell at the system prompt. This example invokes the ceph -s command from within the shell. Example At the system prompt, type cephadm shell and the command you want to execute: Example Note If the node contains configuration and keyring files in /etc/ceph/ , the container environment uses the values in those files as defaults for the cephadm shell. If you execute the cephadm shell on a MON node, the cephadm shell inherits its default configuration from the MON container, instead of using the default configuration. 3.14. Verifying the cluster installation Once the cluster installation is complete, you can verify that the Red Hat Ceph Storage 6 installation is running properly. There are two ways of verifying the storage cluster installation as a root user: Run the podman ps command. Run the cephadm shell ceph -s . Prerequisites Root-level access to all nodes in the storage cluster. Procedure Run the podman ps command: Example Note In Red Hat Ceph Storage 6, the format of the systemd units has changed. In the NAMES column, the unit files now include the FSID . Run the cephadm shell ceph -s command: Example Note The health of the storage cluster is in HEALTH_WARN status as the hosts and the daemons are not added. 3.15. Adding hosts Bootstrapping the Red Hat Ceph Storage installation creates a working storage cluster, consisting of one Monitor daemon and one Manager daemon within the same container. As a storage administrator, you can add additional hosts to the storage cluster and configure them. Note Running the preflight playbook installs podman , lvm2 , chrony , and cephadm on all hosts listed in the Ansible inventory file. When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons. Prerequisites A running Red Hat Ceph Storage cluster. Root-level or user with sudo access to all nodes in the storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Procedure + Note In the following procedure, use either root , as indicated, or the username with which the user is bootstrapped. From the node that contains the admin keyring, install the storage cluster's public SSH key in the root user's authorized_keys file on the new host: Syntax Example Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node. Example From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts . The following example shows the structure of a typical inventory file: Example Note If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 4. 
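For the key-installation step at the beginning of this procedure, the cluster public key is typically copied with ssh-copy-id; a sketch with illustrative host names:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03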
Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chrony , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. From the bootstrap node, use the cephadm orchestrator to add the new host to the storage cluster: Syntax Example Optional: You can also add nodes by IP address, before and after you run the preflight playbook. If you do not have DNS configured in your storage cluster environment, you can add the hosts by IP address, along with the host names. Syntax Example Verification View the status of the storage cluster and verify that the new host has been added. The STATUS of the hosts is blank, in the output of the ceph orch host ls command. Example Additional Resources See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide . See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide . 3.15.1. Using the addr option to identify hosts The addr option offers an additional way to contact a host. Add the IP address of the host to the addr option. If ssh cannot connect to the host by its hostname, then it uses the value stored in addr to reach the host by its IP address. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Run this procedure from inside the cephadm shell. Add the IP address: Syntax Example Note If adding a host by hostname results in that host being added with an IPv6 address instead of an IPv4 address, use ceph orch host to specify the IP address of that host: To convert the IP address from IPv6 format to IPv4 format for a host you have added, use the following command: 3.15.2. Adding multiple hosts Use a YAML file to add multiple hosts to the storage cluster at the same time. Note Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt . If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Copy over the public ssh key to each of the hosts that you want to add. Use a text editor to create a hosts.yaml file. Add the host descriptions to the hosts.yaml file, as shown in the following example. Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---). Example If you created the hosts.yaml file within the host container, invoke the ceph orch apply command: Example If you created the hosts.yaml file directly on the local host, use the cephadm shell to mount the file: Example View the list of hosts and their labels: Example Note If a host is online and operating normally, its status is blank. An offline host shows a status of OFFLINE, and a host in maintenance mode shows a status of MAINTENANCE. 3.15.3. 
Adding hosts in disconnected deployments If you are running a storage cluster on a private network and your host domain name server (DNS) cannot be reached through private IP, you must include both the host name and the IP address for each host you want to add to the storage cluster. Prerequisites A running storage cluster. Root-level access to all hosts in the storage cluster. Procedure Invoke the cephadm shell. Syntax Add the host: Syntax Example 3.15.4. Removing hosts You can remove hosts of a Ceph cluster with the Ceph Orchestrators. All the daemons are removed with the drain option which adds the _no_schedule label to ensure that you cannot deploy any daemons or a cluster till the operation is complete. Important If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the storage cluster. All the services are deployed. Cephadm is deployed on the nodes where the services have to be removed. Procedure Log into the Cephadm shell: Example Fetch the host details: Example Drain all the daemons from the host: Syntax Example The _no_schedule label is automatically applied to the host which blocks deployment. Check the status of OSD removal: Example When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster. Check if all the daemons are removed from the storage cluster: Syntax Example Remove the host: Syntax Example Additional Resources See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 3.16. Labeling hosts The Ceph orchestrator supports assigning labels to hosts. Labels are free-form and have no specific meanings. This means that you can use mon , monitor , mycluster_monitor , or any other text string. Each host can have multiple labels. For example, apply the mon label to all hosts on which you want to deploy Ceph Monitor daemons, mgr for all hosts on which you want to deploy Ceph Manager daemons, rgw for Ceph Object Gateway daemons, and so on. Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels. 3.16.1. Adding a label to a host Use the Ceph Orchestrator to add a label to a host. Labels can be used to specify placement of daemons. A few examples of labels are mgr , mon , and osd based on the service deployed on the hosts. Each host can have multiple labels. You can also add the following host labels that have special meaning to cephadm and they begin with _ : _no_schedule : This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host. _no_autotune_memory : This label does not autotune memory on the host. 
It prevents the daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host. _admin : By default, the _admin label is applied to the bootstrapped host in the storage cluster and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy configuration and keyring files in the /etc/ceph directory. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Hosts are added to the storage cluster. Procedure Log in to the Cephadm shell: Example Add a label to a host: Syntax Example Verification List the hosts: Example 3.16.2. Removing a label from a host You can use the Ceph orchestrator to remove a label from a host. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Launch the cephadm shell: Remove the label. Syntax Example Verification List the hosts: Example 3.16.3. Using host labels to deploy daemons on specific hosts You can use host labels to deploy daemons to specific hosts. There are two ways to use host labels to deploy daemons on specific hosts: By using the --placement option from the command line. By using a YAML file. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Log into the Cephadm shell: Example List current hosts and labels: Example Method 1 : Use the --placement option to deploy a daemon from the command line: Syntax Example Method 2 To assign the daemon to a specific host label in a YAML file, specify the service type and label in the YAML file: Create the placement.yml file: Example Specify the service type and label in the placement.yml file: Example Apply the daemon placement file: Syntax Example Verification List the status of the daemons: Syntax Example 3.17. Adding Monitor service A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes. Note In the case of a firewall, see the Firewall settings for Ceph Monitor node section of the Red Hat Ceph Storage Configuration Guide for details. Note The bootstrap node is the initial monitor of the storage cluster. Be sure to include the bootstrap node in the list of hosts to which you want to deploy. Note If you want to apply Monitor service to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mon --placement host1 and then specify ceph orch apply mon --placement host2 , the second command removes the Monitor service on host1 and applies a Monitor service to host2. If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new hosts to the cluster. cephadm automatically configures the Monitor daemons on the new hosts. The new hosts reside on the same subnet as the first (bootstrap) host in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. Prerequisites Root-level access to all hosts in the storage cluster. A running storage cluster. 
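As the note above advises, list every target host in a single ceph orch apply mon command rather than running one command per host; for example, with illustrative host names:

ceph orch apply mon host01,host02,host03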
Procedure Apply the five Monitor daemons to five random hosts in the storage cluster: Disable automatic Monitor deployment: 3.17.1. Adding Monitor nodes to specific hosts Use host labels to identify the hosts that contain Monitor nodes. Prerequisites Root-level access to all nodes in the storage cluster. A running storage cluster. Procedure Assign the mon label to the host: Syntax Example View the current hosts and labels: Syntax Example Deploy monitors based on the host label: Syntax Deploy monitors on a specific set of hosts: Syntax Example Note Be sure to include the bootstrap node in the list of hosts to which you want to deploy. 3.18. Setting up the admin node Use an admin node to administer the storage cluster. An admin node contains both the cluster configuration file and the admin keyring. Both of these files are stored in the directory /etc/ceph and use the name of the storage cluster as a prefix. For example, the default ceph cluster name is ceph . In a cluster using the default name, the admin keyring is named /etc/ceph/ceph.client.admin.keyring . The corresponding cluster configuration file is named /etc/ceph/ceph.conf . To set up additional hosts in the storage cluster as admin nodes, apply the _admin label to the host you want to designate as an administrator node. Note By default, after applying the _admin label to a node, cephadm copies the ceph.conf and client.admin keyring files to that node. The _admin label is automatically applied to the bootstrap node unless the --skip-admin-label option was specified with the cephadm bootstrap command. Prerequisites A running storage cluster with cephadm installed. The storage cluster has running Monitor and Manager nodes. Root-level access to all nodes in the cluster. Procedure Use ceph orch host ls to view the hosts in your storage cluster: Example Use the _admin label to designate the admin host in your storage cluster. For best results, this host should have both Monitor and Manager daemons running. Syntax Example Verify that the admin host has the _admin label. Example Log in to the admin node to manage the storage cluster. 3.18.1. Deploying Ceph monitor nodes using host labels A typical Red Hat Ceph Storage storage cluster has three or five Ceph Monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Ceph Monitor nodes. If your Ceph Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Ceph Monitor daemons as you add new nodes to the cluster. cephadm automatically configures the Ceph Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first (bootstrap) node in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. Note Use host labels to identify the hosts that contain Ceph Monitor nodes. Prerequisites Root-level access to all nodes in the storage cluster. A running storage cluster. Procedure Assign the mon label to the host: Syntax Example View the current hosts and labels: Syntax Example Deploy Ceph Monitor daemons based on the host label: Syntax Deploy Ceph Monitor daemons on a specific set of hosts: Syntax Example Note Be sure to include the bootstrap node in the list of hosts to which you want to deploy. 3.18.2. Adding Ceph Monitor nodes by IP address or network name A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. 
If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes. If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new nodes to the cluster. You do not need to configure the Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first node in the storage cluster. The first node in the storage cluster is the bootstrap node. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. Prerequisites Root-level access to all nodes in the storage cluster. A running storage cluster. Procedure To deploy each additional Ceph Monitor node: Syntax Example 3.19. Adding Manager service cephadm automatically installs a Manager daemon on the bootstrap node during the bootstrapping process. Use the Ceph orchestrator to deploy additional Manager daemons. The Ceph orchestrator deploys two Manager daemons by default. To deploy a different number of Manager daemons, specify a different number. If you do not specify the hosts where the Manager daemons should be deployed, the Ceph orchestrator randomly selects the hosts and deploys the Manager daemons to them. Note If you want to apply Manager daemons to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mgr --placement host1 and then specify ceph orch apply mgr --placement host2 , the second command removes the Manager daemon on host1 and applies a Manager daemon to host2. Red Hat recommends that you use the --placement option to deploy to specific hosts. Prerequisites A running storage cluster. Procedure To specify that you want to apply a certain number of Manager daemons to randomly selected hosts: Syntax Example To add Manager daemons to specific hosts in your storage cluster: Syntax Example 3.20. Adding OSDs Cephadm will not provision an OSD on a device that is not available. A storage device is considered available if it meets all of the following conditions: The device must have no partitions. The device must not be mounted. The device must not contain a file system. The device must not contain a Ceph BlueStore OSD. The device must be larger than 5 GB. Important By default, the osd_memory_target_autotune parameter is set to true in Red Hat Ceph Storage 6.0. For more information about tuning OSD memory, see the Automatically tuning OSD memory section in the Red Hat Ceph Storage Operations Guide . Prerequisites A running Red Hat Ceph Storage cluster. Procedure List the available devices to deploy OSDs: Syntax Example You can either deploy the OSDs on specific hosts or on all the available devices: To create an OSD from a specific device on a specific host: Syntax Example To deploy OSDs on any available and unused devices, use the --all-available-devices option. Example Note This command creates colocated WAL and DB daemons. If you want to create non-colocated daemons, do not use this command. Additional Resources For more information about drive specifications for OSDs, see the Advanced service specifications and filters for deploying OSDs section in the Red Hat Ceph Storage Operations Guide . For more information about zapping devices to clear data on devices, see the Zapping devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 3.21. 
Running the cephadm-clients playbook The cephadm-clients.yml playbook handles the distribution of configuration and admin keyring files to a group of Ceph clients. Note If you do not specify a configuration file when you run the playbook, the playbook will generate and distribute a minimal configuration file. By default, the generated file is located at /etc/ceph/ceph.conf . Note If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes. For more information, see Upgrading the Red Hat Ceph Storage cluster section in the Red Hat Ceph Storage Upgrade Guide . Prerequisites Root-level access to the Ansible administration node. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. The cephadm-ansible package is installed. The preflight playbook has been run on the initial host in the storage cluster. For more information, see Running the preflight playbook . The client_group variable must be specified in the Ansible inventory file. The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring . Procedure Navigate to the /usr/share/cephadm-ansible directory. Run the cephadm-clients.yml playbook on the initial host in the group of clients. Use the full path name to the admin keyring on the admin host for PATH_TO_KEYRING . Optional: If you want to specify an existing configuration file to use, specify the full path to the configuration file for CONFIG-FILE . Use the Ansible group name for the group of clients for ANSIBLE_GROUP_NAME . Use the FSID of the cluster where the admin keyring and configuration files are stored for FSID . The default path for the FSID is /var/lib/ceph/ . Syntax Example After installation is complete, the specified clients in the group have the admin keyring. If you did not specify a configuration file, cephadm-ansible creates a minimal default configuration file on each client. Additional Resources For more information about admin keys, see the Ceph User Management section in the Red Hat Ceph Storage Administration Guide . 3.22. Managing operating system tuning profiles with cephadm As a storage administrator, you can use cephadm to create and manage operating system tuning profiles that apply a set of sysctl settings to a given set of hosts in your Red Hat Ceph Storage cluster. Tuning the operating system gives you extra opportunities for better performance of your Red Hat Ceph Storage cluster. Additional Resources For more information about configuring kernel parameters, see the sysctl(8) man page. For more information about tuned profiles, see Customizing TuneD profiles . 3.22.1. Creating tuning profiles You can create a tuning profile by creating a YAML specification file with kernel parameters or by defining kernel parameter settings using the orchestrator CLI. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to an admin host. Installation of the tuned package. Method 1: Create a tuning profile by creating and applying a YAML specification: From a Ceph admin host, create a YAML specification file: Syntax Example Edit the YAML file to include the tuning parameters: Syntax Example Apply the tuning profile: Syntax Example This example writes the profile to /etc/sysctl.d/ on host01 and host02 and runs sysctl --system on each host to reload sysctl variables without rebooting. 
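A tuning-profile specification of the kind just described might look like the following sketch; the profile name, host names, and sysctl values are illustrative:

profile_name: mon_hosts_profile
placement:
  hosts:
    - host01
    - host02
settings:
  fs.file-max: 1000000
  vm.swappiness: 13

Applying it with ceph orch tuned-profile apply -i mon_hosts_profile.yaml writes the settings to the listed hosts, as described above.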
Note Cephadm writes the profile file name under /etc/sysctl.d/ as TUNED_PROFILE_NAME -cephadm-tuned-profile.conf where TUNED_PROFILE_NAME is the profile_name you specify in the provided YAML specification. The sysctl command applies settings in lexicographical order by the file name the setting occurs in. If multiple files contain the same setting, the entry in the file with the lexicographically latest name will take precedence. To ensure you apply settings before or after other configuration files that may exist, set the profile_name in your specification file accordingly. Note Cephadm applies sysctl settings only at the host level and not to any certain daemon or container. Method 2: Create a tuning profile by using the orchestrator CLI: From a Ceph admin host, specify the tuning profile name, placement, and settings: Syntax Example Verification List the tuning profiles that cephadm is managing: Example 3.22.2. Viewing tuning profiles You can view all the tuning profiles that cephadm manages by running the tuned-profile ls command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to an admin host. Installation of the tuned package. Procedure From a Ceph admin host, list the tuning profiles: Syntax Example Note If you need to make modifications and re-apply a profile, passing the --format yaml parameter to the tuned-profile ls command will present the profiles in a format that you can copy and re-apply. Example 3.22.3. Modifying tuning profiles After you create tuning profiles, you can modify the exiting tuning profiles to adjust sysctl settings when needed. You can modify existing tuning profiles in two ways: Re-apply a YAML specification with the same profile name. Use the tuned-profile add-setting and rm-setting parameters to adjust a setting. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to an admin host. Installation of the tuned package. Method 1: Modify a setting using the tuned-profile add-setting and rm-setting parameters: From a Ceph admin host, add or modify a setting for an existing profile: Syntax Example To remove a setting from an existing profile: Syntax Example Method 2: Modify a setting by re-applying a YAML specification with the same profile name: From a Ceph admin host, create the YAML specification file or modify an existing specification file: Syntax Example Edit the YAML file to include the tuned parameters you want to modify: Syntax Example Apply the tuning profile: Syntax Example Note Modifying the placement will require re-applying a profile with the same name. Cephadm tracks profiles by their name, therefore applying a profile with the same name as an existing profile, results in the old profile being overwritten. 3.22.4. Removing tuning profiles As a storage administrator, you can remove tuning profiles that you no longer want cephadm to manage, with the tuned-profile rm command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to an admin host. Installation of the tuned package. Procedure From a Ceph admin host, view the tuning profiles that cephadm is managing: Example Remove the tuning profile: Syntax Example When cephadm removes a tuning profile, it will remove the profile file previously written to the /etc/sysctl.d directory on the corresponding host. 3.23. Purging the Ceph storage cluster Purging the Ceph storage cluster clears any data or connections that remain from deployments on your server. 
For Red Hat Enterprise Linux 8, this Ansible script removes all daemons, logs, and data that belong to the FSID passed to the script from all hosts in the storage cluster. For Red Hat Enterprise Linux 9, use the cephadm rm-cluster command since Ansible is not supported. For Red Hat Enterprise Linux 8 The Ansible inventory file lists all the hosts in your cluster and what roles each host plays in your Ceph storage cluster. The default location for an inventory file is /usr/share/cephadm-ansible/hosts , but this file can be placed anywhere. Important This process works only if the cephadm binary is installed on all hosts in the storage cluster. The following example shows the structure of an inventory file: Example Prerequisites A running Red Hat Ceph Storage cluster. Ansible 2.12 or later is installed on the bootstrap node. Root-level access to the Ansible administration node. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring . Procedure As an Ansible user on the bootstrap node, run the purge script: Syntax Example Note An additional extra-var ( -e ceph_origin=rhcs ) is required to zap the disk devices during the purge. When the script has completed, the entire storage cluster, including all OSD disks, will have been removed from all hosts in the cluster. For Red Hat Enterprise Linux 9 Prerequisites A running Red Hat Ceph Storage cluster. Procedure Disable cephadm to stop all the orchestration operations to avoid deploying new daemons: Example Get the FSID of the cluster: Example Exit the cephadm shell. Example Purge the Ceph daemons from all hosts in the cluster: Syntax Example
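To make the final purge step concrete for Red Hat Enterprise Linux 9, the command generally takes the following form; the FSID is illustrative, and --zap-osds also wipes the OSD devices:

cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544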
[
"cephadm shell ceph -s",
"cephadm shell ceph -s",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches ' Red Hat Ceph Storage '",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf update",
"subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms",
"dnf install cephadm-ansible",
"cd /usr/share/cephadm-ansible",
"mkdir -p inventory/staging inventory/production",
"[defaults] inventory = ./inventory/staging",
"touch inventory/staging/hosts touch inventory/production/hosts",
"NODE_NAME_1 NODE_NAME_2 [admin] ADMIN_NODE_NAME_1",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml",
"ansible-playbook -i inventory/production/hosts PLAYBOOK.yml",
"ssh root@myhostname root@myhostname password: Permission denied, please try again.",
"echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf",
"systemctl restart sshd.service",
"ssh root@ HOST_NAME",
"ssh root@host01",
"ssh root@ HOST_NAME",
"ssh root@host01",
"adduser USER_NAME",
"adduser ceph-admin",
"passwd USER_NAME",
"passwd ceph-admin",
"cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF",
"cat << EOF >/etc/sudoers.d/ceph-admin ceph-admin ALL = (root) NOPASSWD:ALL EOF",
"chmod 0440 /etc/sudoers.d/ USER_NAME",
"chmod 0440 /etc/sudoers.d/ceph-admin",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm generate-key",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key",
"[ceph-admin@admin cephadm-ansible]USDceph cephadm clear-key",
"[ceph-admin@admin cephadm-ansible]USD ceph mgr fail",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user <user>",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user user",
"ceph cephadm get-pub-key > ~/ceph.pub",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ceph.pub USER @ HOST",
"[ceph-admin@admin cephadm-ansible]USD ssh-copy-id ceph-admin@host01",
"[ceph-admin@admin ~]USD ssh-keygen",
"ssh-copy-id USER_NAME @ HOST_NAME",
"[ceph-admin@admin ~]USD ssh-copy-id ceph-admin@host01",
"[ceph-admin@admin ~]USD touch ~/.ssh/config",
"Host host01 Hostname HOST_NAME User USER_NAME Host host02 Hostname HOST_NAME User USER_NAME",
"Host host01 Hostname host01 User ceph-admin Host host02 Hostname host02 User ceph-admin Host host03 Hostname host03 User ceph-admin",
"[ceph-admin@admin ~]USD chmod 600 ~/.ssh/config",
"host02 host03 host04 [admin] host01",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host01",
"cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know",
"cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON",
"cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json",
"{ \"url\":\" REGISTRY_URL \", \"username\":\" USER_NAME \", \"password\":\" PASSWORD \" }",
"{ \"url\":\"registry.redhat.io\", \"username\":\"myuser1\", \"password\":\"mypassword1\" }",
"cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json",
"cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json",
"service_type: host addr: host01 hostname: host01 --- service_type: host addr: host02 hostname: host02 --- service_type: host addr: host03 hostname: host03 --- service_type: host addr: host04 hostname: host04 --- service_type: mon placement: host_pattern: \"host[0-2]\" --- service_type: osd service_id: my_osds placement: host_pattern: \"host[1-3]\" data_devices: all: true",
"cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"su - SSH_USER_NAME",
"su - ceph Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0",
"ssh host01 Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0",
"cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --all --matches=\"*Ceph*\"",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf install -y podman httpd-tools",
"mkdir -p /opt/registry/{auth,certs,data}",
"htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD",
"htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS: LOCAL_NODE_FQDN \"",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:admin.lab.redhat.com\"",
"ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \" LOCAL_NODE_FQDN \"",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/ ssh root@host01 update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"./mirror-registry install --sslKey /opt/registry/certs/domain.key --sslCert /opt/registry/certs/domain.crt --initUser myregistryuser --initPassword myregistrypass",
"unqualified-search-registries = [\"registry.redhat.io\", \"registry.access.redhat.com\", \"registry.fedoraproject.org\", \"registry.centos.org\", \"docker.io\"]",
"login registry.redhat.io",
"run -v / CERTIFICATE_DIRECTORY_PATH :/certs:Z -v / CERTIFICATE_DIRECTORY_PATH /domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN : RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/ SRC_IMAGE : SRC_TAG docker:// LOCAL_NODE_FQDN :8433/ DST_IMAGE : DST_TAG",
"podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-6-rhel9:latest docker://admin.lab.redhat.com:8433/rhceph/rhceph-6-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 docker://admin.lab.redhat.com:8433/openshift4/ose-prometheus-node-exporter:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-6-dashboard-rhel9:latest docker://admin.lab.redhat.com:8433/rhceph/rhceph-6-dashboard-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.12 docker://admin.lab.redhat.com:8433/openshift4/ose-prometheus:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 docker://admin.lab.redhat.com:8433/openshift4/ose-prometheus-alertmanager:v4.12",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\"",
"ansible-playbook -vvv -i INVENTORY_HOST_FILE_ cephadm-set-container-insecure-registries.yml -e insecure_registry= REGISTRY_URL",
"ansible-playbook -vvv -i hosts cephadm-set-container-insecure-registries.yml -e insecure_registry=host01:5050",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit host02",
"cephadm --image PRIVATE_REGISTRY_NODE_FQDN :5000/ CUSTOM_IMAGE_NAME : IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN :5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD",
"cephadm --image admin.lab.redhat.com:5000/rhceph-6-rhel9:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD",
"ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1",
"ceph config set mgr mgr/cephadm/ OPTION_NAME CUSTOM_REGISTRY_NAME / CONTAINER_NAME",
"container_image_prometheus container_image_grafana container_image_alertmanager container_image_node_exporter",
"ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer",
"ceph orch redeploy node-exporter",
"ceph config rm mgr mgr/cephadm/ OPTION_NAME",
"ceph config rm mgr mgr/cephadm/container_image_prometheus",
"[ansible@admin ~]USD cd /usr/share/cephadm-ansible",
"ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user= USER_NAME -e cephadm_pubkey_path= home/cephadm/ceph.key -e admin_node= ADMIN_NODE_NAME_1",
"[ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01 [ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01",
"cephadm shell ceph -s",
"cephadm shell ceph -s",
"podman ps",
"cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr",
".Syntax [source,subs=\"verbatim,quotes\"] ---- ceph cephadm registry-login --registry-url _CUSTOM_REGISTRY_NAME_ --registry_username _REGISTRY_USERNAME_ --registry_password _REGISTRY_PASSWORD_ ----",
".Example ---- ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1 ----",
"ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"[ceph-admin@admin ~]USD cat hosts host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"ceph orch host add NEWHOST",
"ceph orch host add host02 Added host 'host02' with addr '10.10.128.69' ceph orch host add host03 Added host 'host03' with addr '10.10.128.70'",
"ceph orch host add HOSTNAME IP_ADDRESS",
"ceph orch host add host02 10.10.128.69 Added host 'host02' with addr '10.10.128.69'",
"ceph orch host ls",
"ceph orch host add HOSTNAME IP_ADDR",
"ceph orch host add host01 10.10.128.68",
"ceph orch host set-addr HOSTNAME IP_ADDR",
"ceph orch host set-addr HOSTNAME IPV4_ADDRESS",
"service_type: host addr: hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: hostname: host03 labels: - mon - osd - mgr --- service_type: host addr: hostname: host04 labels: - mon - osd",
"ceph orch apply -i hosts.yaml Added host 'host02' with addr '10.10.128.69' Added host 'host03' with addr '10.10.128.70' Added host 'host04' with addr '10.10.128.71'",
"cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml",
"ceph orch host ls HOST ADDR LABELS STATUS host02 host02 mon osd mgr host03 host03 mon osd mgr host04 host04 mon osd",
"cephadm shell",
"ceph orch host add HOST_NAME HOST_ADDRESS",
"ceph orch host add host03 10.10.128.70",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host label add HOSTNAME LABEL",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls HOST ADDR LABELS STATUS host01 _admin mon osd mgr host02 mon osd mgr mylabel",
"ceph orch apply DAEMON --placement=\"label: LABEL \"",
"ceph orch apply prometheus --placement=\"label:mylabel\"",
"vi placement.yml",
"service_type: prometheus placement: label: \"mylabel\"",
"ceph orch apply -i FILENAME",
"ceph orch apply -i placement.yml Scheduled prometheus update...",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=prometheus NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID prometheus.host02 host02 *:9095 running (2h) 8m ago 2h 85.3M - 2.22.2 ac25aac5d567 ad8c7593d7c0",
"ceph orch apply mon 5",
"ceph orch apply mon --unmanaged",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host01 mon",
"ceph orch host ls",
"ceph orch host label add host02 mon ceph orch host label add host03 mon ceph orch host ls HOST ADDR LABELS STATUS host01 mon host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06",
"ceph orch host label add HOSTNAME _admin",
"ceph orch host label add host03 _admin",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host02 mon ceph orch host label add host03 mon",
"ceph orch host ls",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [ NODE:IP_ADDRESS_OR_NETWORK_NAME ...]",
"ceph orch apply mon host02:10.10.128.69 host03:mynetwork",
"ceph orch apply mgr NUMBER_OF_DAEMONS",
"ceph orch apply mgr 3",
"ceph orch apply mgr --placement \" HOSTNAME1 HOSTNAME2 HOSTNAME3 \"",
"ceph orch apply mgr --placement \"host02 host03 host04\"",
"ceph orch device ls [--hostname= HOSTNAME1 HOSTNAME2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch daemon add osd HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph orch apply osd --all-available-devices",
"ansible-playbook -i hosts cephadm-clients.yml -extra-vars '{\"fsid\":\" FSID \", \"client_group\":\" ANSIBLE_GROUP_NAME \", \"keyring\":\" PATH_TO_KEYRING \", \"conf\":\" CONFIG_FILE \"}'",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"be3ca2b2-27db-11ec-892b-005056833d58\",\"client_group\":\"fs_clients\",\"keyring\":\"/etc/ceph/fs.keyring\", \"conf\": \"/etc/ceph/ceph.conf\"}'",
"touch TUNED_PROFILE_NAME .yaml",
"touch mon_hosts_profile.yaml",
"profile_name: PROFILE_NAME placement: hosts: - HOST1 - HOST2 settings: SYSCTL_PARAMETER : SYSCTL_PARAMETER_VALUE",
"profile_name: mon_hosts_profile placement: hosts: - host01 - host02 settings: fs.file-max: 1000000 vm.swappiness: 13",
"ceph orch tuned-profile apply -i TUNED_PROFILE_NAME .yaml",
"ceph orch tuned-profile apply -i mon_hosts_profile.yaml Saved tuned profile mon_hosts_profile",
"ceph orch tuned-profile apply PROFILE_NAME --placement=' HOST1 , HOST2 ' --settings=' SETTING_NAME1 = VALUE1 , SETTING_NAME2 = VALUE2 '",
"ceph orch tuned-profile apply osd_hosts_profile --placement='host04,host05' --settings='fs.file-max=200000,vm.swappiness=19' Saved tuned profile osd_hosts_profile",
"ceph orch tuned-profile ls profile_name: osd_hosts_profile placement: host04;host05 settings: fs.file-max: 200000 vm.swappiness: 19",
"ceph orch tuned-profile ls",
"ceph orch tuned-profile ls profile_name: osd_hosts_profile placement: host04;host05 settings: fs.file-max: 200000 vm.swappiness: 19 --- profile_name: mon_hosts_profile placement: host01;host02 settings: fs.file-max: 1000000 vm.swappiness: 13",
"ceph orch tuned-profile ls --format yaml placement: hosts: - host01 - host02 profile_name: mon_hosts_profile settings: vm.swappiness: '13' fs.file-max: 1000000",
"ceph orch tuned-profile add-setting PROFILE_NAME SETTING_NAME VALUE",
"ceph orch tuned-profile add-setting mon_hosts_profile vm.vfs_cache_pressure 110 Added setting vm.vfs_cache_pressure with value 110 to tuned profile mon_hosts_profile",
"ceph orch tuned-profile rm-setting PROFILE_NAME SETTING_NAME",
"ceph orch tuned-profile rm-setting mon_hosts_profile vm.vfs_cache_pressure Removed setting vm.vfs_cache_pressure from tuned profile mon_hosts_profile",
"vi TUNED_PROFILE_NAME .yaml",
"vi mon_hosts_profile.yaml",
"profile_name: PROFILE_NAME placement: hosts: - HOST1 - HOST2 settings: SYSCTL_PARAMETER : SYSCTL_PARAMETER_VALUE",
"profile_name: mon_hosts_profile placement: hosts: - host01 - host02 settings: fs.file-max: 2000000 vm.swappiness: 15",
"ceph orch tuned-profile apply -i TUNED_PROFILE_NAME .yaml",
"ceph orch tuned-profile apply -i mon_hosts_profile.yaml Saved tuned profile mon_hosts_profile",
"ceph orch tuned-profile ls",
"ceph orch tuned-profile rm TUNED_PROFILE_NAME",
"ceph orch tuned-profile rm mon_hosts_profile Removed tuned profile mon_hosts_profile",
"host02 host03 host04 [admin] host01 [clients] client01 client02 client03",
"ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid= FSID -vvv",
"[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=a6ca415a-cde7-11eb-a41a-002590fc2544 -vvv",
"ceph mgr module disable cephadm",
"ceph fsid",
"exit",
"cephadm rm-cluster --force --zap-osds --fsid FSID",
"cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/installation_guide/red-hat-ceph-storage-installation
Chapter 9. Logging
Chapter 9. Logging 9.1. Configuring logging AMQ JavaScript uses the JavaScript debug module to implement logging. For example, to enable detailed client logging, set the DEBUG environment variable to rhea* : Example: Enabling detailed logging $ export DEBUG=rhea* $ <your-client-program> 9.2. Enabling protocol logging The client can log AMQP protocol frames to the console. This data is often critical when diagnosing problems. To enable protocol logging, set the DEBUG environment variable to rhea:frames : Example: Enabling protocol logging $ export DEBUG=rhea:frames $ <your-client-program>
|
[
"export DEBUG=rhea* <your-client-program>",
"export DEBUG=rhea:frames <your-client-program>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/logging
|
Chapter 16. ReplicaSet [apps/v1]
|
Chapter 16. ReplicaSet [apps/v1] Description ReplicaSet ensures that a specified number of pod replicas are running at any given time. Type object 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ReplicaSetSpec is the specification of a ReplicaSet. status object ReplicaSetStatus represents the current status of a ReplicaSet. 16.1.1. .spec Description ReplicaSetSpec is the specification of a ReplicaSet. Type object Required selector Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller selector LabelSelector Selector is a label query over pods that should match the replica count. Label keys and values that must match in order to be controlled by this replica set. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template PodTemplateSpec Template is the object that describes the pod that will be created if insufficient replicas are detected. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template 16.1.2. .status Description ReplicaSetStatus represents the current status of a ReplicaSet. Type object Required replicas Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this replica set. conditions array Represents the latest available observations of a replica set's current state. conditions[] object ReplicaSetCondition describes the state of a replica set at a certain point. fullyLabeledReplicas integer The number of pods that have labels matching the labels of the pod template of the replicaset. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed ReplicaSet. readyReplicas integer readyReplicas is the number of pods targeted by this ReplicaSet with a Ready Condition. replicas integer Replicas is the most recently observed number of replicas. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller 16.1.3. .status.conditions Description Represents the latest available observations of a replica set's current state. 
Type array 16.1.4. .status.conditions[] Description ReplicaSetCondition describes the state of a replica set at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of replica set condition. 16.2. API endpoints The following API endpoints are available: /apis/apps/v1/replicasets GET : list or watch objects of kind ReplicaSet /apis/apps/v1/watch/replicasets GET : watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/replicasets DELETE : delete collection of ReplicaSet GET : list or watch objects of kind ReplicaSet POST : create a ReplicaSet /apis/apps/v1/watch/namespaces/{namespace}/replicasets GET : watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/replicasets/{name} DELETE : delete a ReplicaSet GET : read the specified ReplicaSet PATCH : partially update the specified ReplicaSet PUT : replace the specified ReplicaSet /apis/apps/v1/watch/namespaces/{namespace}/replicasets/{name} GET : watch changes to an object of kind ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status GET : read status of the specified ReplicaSet PATCH : partially update status of the specified ReplicaSet PUT : replace status of the specified ReplicaSet 16.2.1. /apis/apps/v1/replicasets Table 16.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ReplicaSet Table 16.2. HTTP responses HTTP code Reponse body 200 - OK ReplicaSetList schema 401 - Unauthorized Empty 16.2.2. /apis/apps/v1/watch/replicasets Table 16.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. Table 16.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.3. /apis/apps/v1/namespaces/{namespace}/replicasets Table 16.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 16.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ReplicaSet Table 16.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 16.8. Body parameters Parameter Type Description body DeleteOptions schema Table 16.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ReplicaSet Table 16.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.11. HTTP responses HTTP code Reponse body 200 - OK ReplicaSetList schema 401 - Unauthorized Empty HTTP method POST Description create a ReplicaSet Table 16.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.13. Body parameters Parameter Type Description body ReplicaSet schema Table 16.14. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 202 - Accepted ReplicaSet schema 401 - Unauthorized Empty 16.2.4. /apis/apps/v1/watch/namespaces/{namespace}/replicasets Table 16.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 16.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. Table 16.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.5. /apis/apps/v1/namespaces/{namespace}/replicasets/{name} Table 16.18. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 16.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ReplicaSet Table 16.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 16.21. Body parameters Parameter Type Description body DeleteOptions schema Table 16.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ReplicaSet Table 16.23. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ReplicaSet Table 16.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 16.25. Body parameters Parameter Type Description body Patch schema Table 16.26. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ReplicaSet Table 16.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.28. Body parameters Parameter Type Description body ReplicaSet schema Table 16.29. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty 16.2.6. /apis/apps/v1/watch/namespaces/{namespace}/replicasets/{name} Table 16.30. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 16.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ReplicaSet. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 16.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 16.2.7. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status Table 16.33. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 16.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ReplicaSet Table 16.35. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ReplicaSet Table 16.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 16.37. Body parameters Parameter Type Description body Patch schema Table 16.38. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ReplicaSet Table 16.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.40. Body parameters Parameter Type Description body ReplicaSet schema Table 16.41. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty
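To make the spec fields above concrete, the following is a minimal illustrative manifest submitted through the create endpoint; the frontend name, the my-namespace namespace, and the image are placeholder values rather than anything defined by the API reference:

$ oc create -f - -n my-namespace <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend                  # placeholder name
spec:
  replicas: 3                     # desired pod count; defaults to 1 if omitted
  selector:
    matchLabels:
      app: frontend               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: registry.example.com/frontend:1.0   # placeholder image
EOF

The status subresource described in 16.2.7 can then be read back, for example with oc get replicaset frontend -n my-namespace -o jsonpath='{.status.readyReplicas}'.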
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/replicaset-apps-v1
|
Chapter 19. Managing cloud provider credentials
|
Chapter 19. Managing cloud provider credentials 19.1. About the Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. 19.1.1. Modes By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint , passthrough , or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers. Mint : In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. Passthrough : In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. Manual mode with long-term credentials for components : In manual mode, you can manage long-term cloud credentials instead of the CCO. Manual mode with short-term credentials for components : For some providers, you can use the CCO utility ( ccoctl ) during installation to implement short-term credentials for individual components. These credentials are created and managed outside the OpenShift Container Platform cluster. Table 19.1. CCO mode support matrix Cloud provider Mint Passthrough Manual with long-term credentials Manual with short-term credentials Alibaba Cloud X [1] Amazon Web Services (AWS) X X X X Global Microsoft Azure X X X Microsoft Azure Stack Hub X Google Cloud Platform (GCP) X X X X IBM Cloud(R) X [1] Nutanix X [1] Red Hat OpenStack Platform (RHOSP) X VMware vSphere X This platform uses the ccoctl utility during installation to configure long-term credentials. 19.1.2. Determining the Cloud Credential Operator mode For platforms that support using the CCO in multiple modes, you can determine what mode the CCO is configured to use by using the web console or the CLI. Figure 19.1. Determining the CCO configuration 19.1.2.1. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . 
The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, inspect the annotations on the cluster root secret: Navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials To view the CCO mode that the cluster is using, click 1 annotation under Annotations , and check the value field. The following values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. 
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, run the following command: USD oc get secret <secret_name> \ -n kube-system \ -o jsonpath \ --template '{ .metadata.annotations }' where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. This command displays the value of the .metadata.annotations parameter in the cluster root secret object. The following output values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.3. 
Default behavior For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs. By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs. If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installation program fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered. If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials. To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can configure your cluster to use a different CCO mode that is supported for your cloud provider. 19.1.4. Additional resources Cluster Operators reference page for the Cloud Credential Operator 19.2. The Cloud Credential Operator in mint mode Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. Mint mode supports Amazon Web Services (AWS) and Google Cloud Platform (GCP) clusters. 19.2.1. Mint mode credentials management For clusters that use the CCO in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. With mint mode, each cluster component has only the specific permissions it requires. Cloud credential reconciliation is automatic and continuous so that components can perform actions that require additional credentials or permissions. For example, a minor version cluster update (such as updating from OpenShift Container Platform 4.16 to 4.17) might include an updated CredentialsRequest resource for a cluster component. The CCO, operating in mint mode, uses the admin credential to process the CredentialsRequest resource and create users with limited permissions to satisfy the updated authentication requirements. Note By default, mint mode requires storing the admin credential in the cluster kube-system namespace. If this approach does not meet the security requirements of your organization, you can remove the credential after installing the cluster . 19.2.1.1. Mint mode permissions requirements When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. 
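One way to confirm that the CCO can process CredentialsRequest CRs with the credential you provided is to inspect the cloud-credential cluster Operator and the status conditions that the CCO sets on individual CredentialsRequest objects. The following commands are a minimal sketch; <credentials_request_name> is a placeholder for one of the objects returned by the listing, and the conditions output can be empty when the CCO has nothing to report:
$ oc get co cloud-credential
$ oc -n openshift-cloud-credential-operator get credentialsrequests
$ oc -n openshift-cloud-credential-operator get credentialsrequest <credentials_request_name> -o jsonpath='{.status.conditions}'
A degraded cloud-credential Operator or a condition reporting insufficient cloud credentials indicates that the provided credential does not meet the requirements described in this section.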
If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user. The credential you provide for mint mode in Amazon Web Services (AWS) must have the following permissions: Example 19.1. Required AWS permissions iam:CreateAccessKey iam:CreateUser iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUser iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser iam:SimulatePrincipalPolicy The credential you provide for mint mode in Google Cloud Platform (GCP) must have the following permissions: Example 19.2. Required GCP permissions resourcemanager.projects.get serviceusage.services.list iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.list iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.roles.create iam.roles.get iam.roles.list iam.roles.undelete iam.roles.update resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 19.2.1.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> 19.2.2. Maintaining cloud provider credentials If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . Delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. 
Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. 19.2.3. Additional resources Removing cloud provider credentials 19.3. The Cloud Credential Operator in passthrough mode Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere. In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode. Note Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub. 19.3.1. Passthrough mode permissions requirements When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for. 19.3.1.1. Amazon Web Services (AWS) permissions The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for AWS . 19.3.1.2. Microsoft Azure permissions The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for Azure . 19.3.1.3. 
Google Cloud Platform (GCP) permissions The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for GCP . 19.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role. 19.3.1.5. VMware vSphere permissions To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges: Table 19.2. Required vSphere privileges Category Privileges Datastore Allocate space Folder Create folder , Delete folder vSphere Tagging All privileges Network Assign network Resource Assign virtual machine to resource pool Profile-driven storage All privileges vApp All privileges Virtual machine All privileges 19.3.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Microsoft Azure secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region> On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests: USD cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r Example output mycluster-2mpcn This value would be used in the secret data as follows: azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> Red Hat OpenStack Platform (RHOSP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init> VMware vSphere secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password> 19.3.3. 
Passthrough mode credential maintenance If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating long-term credentials for AWS , Azure , or GCP . 19.3.3.1. Maintaining cloud provider credentials If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources vSphere CSI Driver Operator 19.3.4. Reducing permissions after installation When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer. 
After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using. To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating long-term credentials for AWS , Azure , or GCP . 19.3.5. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for GCP 19.4. Manual mode with long-term credentials for components Manual mode is supported for Alibaba Cloud, Amazon Web Services (AWS), global Microsoft Azure, Microsoft Azure Stack Hub, Google Cloud Platform (GCP), IBM Cloud(R), and Nutanix. 19.4.1. User-managed credentials In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster's cloud provider. Some platforms use the CCO utility ( ccoctl ) to facilitate this process during installation and updates. Using manual mode with long-term credentials allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to services such as the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. For information about configuring your cloud provider to use manual mode, see the manual credentials management options for your cloud provider. Note An AWS, global Azure, or GCP cluster that uses manual mode might be configured to use short-term credentials for different components. For more information, see Manual mode with short-term credentials for components . 19.4.2. Additional resources Manually creating RAM resources for Alibaba Cloud Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for GCP Configuring IAM for IBM Cloud(R) Configuring IAM for Nutanix Manual mode with short-term credentials for components Preparing to update a cluster with manually maintained credentials 19.5. Manual mode with short-term credentials for components During installation, you can configure the Cloud Credential Operator (CCO) to operate in manual mode and use the CCO utility ( ccoctl ) to implement short-term security credentials for individual components that are created and managed outside the OpenShift Container Platform cluster. Note This credentials strategy is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), and global Microsoft Azure only. For AWS and GCP clusters, you must configure your cluster to use this strategy during installation of a new OpenShift Container Platform cluster. You cannot configure an existing AWS or GCP cluster that uses a different credentials strategy to use this feature. If you did not configure your Azure cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster . 
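Both manual-mode strategies start from the CredentialsRequest CRs that ship in the release image. As a minimal sketch of how to obtain those CRs for inspection, assuming you only need the manifests for the release your cluster is currently running, that you have pull access to the release image, and using an arbitrary target directory name, you can extract them with the oc CLI:
$ RELEASE_IMAGE=$(oc get clusterversion version -o jsonpath='{.status.desired.image}')
$ oc adm release extract --credentials-requests --to=./credrequests "${RELEASE_IMAGE}"
The extracted manifests are the input for the ccoctl utility and for the manual credential-creation procedures referenced in this section; see the provider-specific documentation linked above for the exact flags to use for your cloud provider and version.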
Cloud providers use different terms for their implementation of this authentication method. Table 19.3. Short-term credentials provider terminology Cloud provider Provider nomenclature Amazon Web Services (AWS) AWS Security Token Service (STS) Google Cloud Platform (GCP) GCP Workload Identity Global Microsoft Azure Microsoft Entra Workload ID 19.5.1. AWS Security Token Service In manual mode with STS, the individual OpenShift Container Platform cluster components use the AWS Security Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. Additional resources Configuring an AWS cluster to use short-term credentials 19.5.1.1. AWS Security Token Service authentication process The AWS Security Token Service (STS) and the AssumeRole API action allow pods to retrieve access keys that are defined by an IAM role policy. The OpenShift Container Platform cluster includes a Kubernetes service account signing service. This service uses a private key to sign service account JSON web tokens (JWT). A pod that requires a service account token requests one through the pod specification. When the pod is created and assigned to a node, the node retrieves a signed service account from the service account signing service and mounts it onto the pod. Clusters that use STS contain an IAM role ID in their Kubernetes configuration secrets. Workloads assume the identity of this IAM role ID. The signed service account token issued to the workload aligns with the configuration in AWS, which allows AWS STS to grant access keys for the specified IAM role to the workload. AWS STS grants access keys only for requests that include service account tokens that meet the following conditions: The token name and namespace match the service account name and namespace. The token is signed by a key that matches the public key. The public key pair for the service account signing key used by the cluster is stored in an AWS S3 bucket. AWS STS federation validates that the service account token signature aligns with the public key stored in the S3 bucket. 19.5.1.1.1. Authentication flow for AWS STS The following diagram illustrates the authentication flow between AWS and the OpenShift Container Platform cluster when using AWS STS. Token signing is the Kubernetes service account signing service on the OpenShift Container Platform cluster. The Kubernetes service account in the pod is the signed service account token. Figure 19.2. AWS Security Token Service authentication flow Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider combined with AWS IAM roles. Service account tokens that are trusted by AWS IAM are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. 19.5.1.1.2. Token refreshing for AWS STS The signed service account token that a pod uses expires after a period of time. For clusters that use AWS STS, this time period is 3600 seconds, or one hour. The kubelet on the node that the pod is assigned to ensures that the token is refreshed. The kubelet attempts to rotate a token when it is older than 80 percent of its time to live. 19.5.1.1.3. OpenID Connect requirements for AWS STS You can store the public portion of the encryption keys for your OIDC configuration in a public or private S3 bucket. 
The OIDC spec requires the use of HTTPS. AWS services require a public endpoint to expose the OIDC documents in the form of JSON web key set (JWKS) public keys. This allows AWS services to validate the bound tokens signed by Kubernetes and determine whether to trust certificates. As a result, both S3 bucket options require a public HTTPS endpoint and private endpoints are not supported. To use AWS STS, the public AWS backbone for the AWS STS service must be able to communicate with a public S3 bucket or a private S3 bucket with a public CloudFront endpoint. You can choose which type of bucket to use when you process CredentialsRequest objects during installation: By default, the CCO utility ( ccoctl ) stores the OIDC configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. As an alternative, you can have the ccoctl utility store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL. 19.5.1.2. AWS component secret formats Using manual mode with the AWS Security Token Service (STS) changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: AWS secret format using long-term credentials apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key> 1 The namespace for the component. 2 The name of the component secret. AWS secret format using AWS STS apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4 1 The namespace for the component. 2 The name of the component secret. 3 The IAM role for the component. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.1.3. AWS component secret permissions requirements OpenShift Container Platform components require the following permissions. These values are in the CredentialsRequest custom resource (CR) for each component. Note These permissions apply to all resources. Unless specified, there are no request conditions on these permissions. 
Component Custom resource Required permissions for services Cluster CAPI Operator openshift-cluster-api-aws EC2 ec2:CreateTags ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstances ec2:DescribeInternetGateways ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeVpcs ec2:DescribeNetworkInterfaces ec2:DescribeNetworkInterfaceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RunInstances ec2:TerminateInstances Elastic load balancing elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:DeregisterTargets Identity and Access Management (IAM) iam:PassRole iam:CreateServiceLinkedRole Key Management Service (KMS) kms:Decrypt kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Machine API Operator openshift-machine-api-aws EC2 ec2:CreateTags ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeSecurityGroups ec2:DescribeRegions ec2:DescribeSubnets ec2:DescribeVpcs ec2:RunInstances ec2:TerminateInstances Elastic load balancing elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:DeregisterTargets Identity and Access Management (IAM) iam:PassRole iam:CreateServiceLinkedRole Key Management Service (KMS) kms:Decrypt kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Cloud Credential Operator cloud-credential-operator-iam-ro Identity and Access Management (IAM) iam:GetUser iam:GetUserPolicy iam:ListAccessKeys Cluster Image Registry Operator openshift-image-registry S3 s3:CreateBucket s3:DeleteBucket s3:PutBucketTagging s3:GetBucketTagging s3:PutBucketPublicAccessBlock s3:GetBucketPublicAccessBlock s3:PutEncryptionConfiguration s3:GetEncryptionConfiguration s3:PutLifecycleConfiguration s3:GetLifecycleConfiguration s3:GetBucketLocation s3:ListBucket s3:GetObject s3:PutObject s3:DeleteObject s3:ListBucketMultipartUploads s3:AbortMultipartUpload s3:ListMultipartUploadParts Ingress Operator openshift-ingress Elastic load balancing elasticloadbalancing:DescribeLoadBalancers Route 53 route53:ListHostedZones route53:ListTagsForResources route53:ChangeResourceRecordSets Tag tag:GetResources Security Token Service (STS) sts:AssumeRole Cluster Network Operator openshift-cloud-network-config-controller-aws EC2 ec2:DescribeInstances ec2:DescribeInstanceStatus ec2:DescribeInstanceTypes ec2:UnassignPrivateIpAddresses ec2:AssignPrivateIpAddresses ec2:UnassignIpv6Addresses ec2:AssignIpv6Addresses ec2:DescribeSubnets ec2:DescribeNetworkInterfaces AWS Elastic Block Store CSI Driver Operator aws-ebs-csi-driver-operator EC2 ec2:AttachVolume ec2:CreateSnapshot ec2:CreateTags ec2:CreateVolume ec2:DeleteSnapshot ec2:DeleteTags ec2:DeleteVolume ec2:DescribeInstances ec2:DescribeSnapshots ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVolumesModifications ec2:DetachVolume ec2:ModifyVolume ec2:DescribeAvailabilityZones ec2:EnableFastSnapshotRestores Key Management Service (KMS) kms:ReEncrypt* kms:Decrypt kms:Encrypt 
kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Request condition: kms:GrantIsForAWSResource: true 19.5.1.4. OLM-managed Operator support for authentication with AWS STS In addition to OpenShift Container Platform cluster components, some Operators managed by the Operator Lifecycle Manager (OLM) on AWS clusters can use manual mode with STS. These Operators authenticate with limited-privilege, short-term credentials that are managed outside the cluster. To determine if an Operator supports authentication with AWS STS, see the Operator description in OperatorHub. Additional resources CCO-based workflow for OLM-managed Operators with AWS STS 19.5.2. GCP Workload Identity In manual mode with GCP Workload Identity, the individual OpenShift Container Platform cluster components use the GCP workload identity provider to allow components to impersonate GCP service accounts using short-term, limited-privilege credentials. Additional resources Configuring a GCP cluster to use short-term credentials 19.5.2.1. GCP Workload Identity authentication process Requests for new and refreshed credentials are automated by using an appropriately configured OpenID Connect (OIDC) identity provider combined with IAM service accounts. Service account tokens that are trusted by GCP are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. Tokens are refreshed after one hour. The following diagram details the authentication flow between GCP and the OpenShift Container Platform cluster when using GCP Workload Identity. Figure 19.3. GCP Workload Identity authentication flow 19.5.2.2. GCP component secret formats Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are provided to individual OpenShift Container Platform components. Compare the following secret content: GCP secret format apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3 1 The namespace for the component. 2 The name of the component secret. 3 The Base64 encoded service account. Content of the Base64 encoded service_account.json file using long-term credentials { "type": "service_account", 1 "project_id": "<project_id>", "private_key_id": "<private_key_id>", "private_key": "<private_key>", 2 "client_email": "<client_email_address>", "client_id": "<client_id>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>" } 1 The credential type is service_account . 2 The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not rotated. 
Content of the Base64 encoded service_account.json file using GCP Workload Identity { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", 2 "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken", 3 "credential_source": { "file": "<path_to_token>", 4 "format": { "type": "text" } } } 1 The credential type is external_account . 2 The target audience is the GCP Workload Identity provider. 3 The resource URL of the service account that can be impersonated with these credentials. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.3. Microsoft Entra Workload ID In manual mode with Microsoft Entra Workload ID, the individual OpenShift Container Platform cluster components use the Workload ID provider to assign components short-term security credentials. Additional resources Configuring a global Microsoft Azure cluster to use short-term credentials 19.5.3.1. Microsoft Entra Workload ID authentication process The following diagram details the authentication flow between Azure and the OpenShift Container Platform cluster when using Microsoft Entra Workload ID. Figure 19.4. Workload ID authentication flow 19.5.3.2. Azure component secret formats Using manual mode with Microsoft Entra Workload ID changes the content of the Azure credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: Azure secret format using long-term credentials apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque 1 The namespace for the component. 2 The name of the component secret. 3 The client ID of the Microsoft Entra ID identity that the component uses to authenticate. 4 The component secret that is used to authenticate with Microsoft Entra ID for the <client_id> identity. 5 The resource group prefix. 6 The resource group. This value is formed by the <resource_group_prefix> and the suffix -rg . Azure secret format using Microsoft Entra Workload ID apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque 1 The namespace for the component. 2 The name of the component secret. 3 The client ID of the user-assigned managed identity that the component uses to authenticate. 4 The path to the mounted service account token file. 19.5.3.3. Azure component secret permissions requirements OpenShift Container Platform components require the following permissions. These values are in the CredentialsRequest custom resource (CR) for each component. 
Component Custom resource Required permissions for services Cloud Controller Manager Operator openshift-azure-cloud-controller-manager Microsoft.Compute/virtualMachines/read Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/read Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/write Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Cluster CAPI Operator openshift-cluster-api-azure role: Contributor [1] Machine API Operator openshift-machine-api-azure Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/disks/delete Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/skus/read Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/extensions/delete Microsoft.Compute/virtualMachines/extensions/read Microsoft.Compute/virtualMachines/extensions/write Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.Network/applicationSecurityGroups/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/loadBalancers/read Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/routeTables/read Microsoft.Network/virtualNetworks/delete Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Resources/subscriptions/resourceGroups/read Cluster Image Registry Operator openshift-image-registry-azure Data permissions Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action General permissions Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Resources/tags/write Ingress Operator openshift-ingress-azure Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/A/write Cluster Network Operator openshift-cloud-network-config-controller-azure Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Compute/virtualMachines/read Microsoft.Network/virtualNetworks/read 
Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/loadBalancers/backendAddressPools/join/action Azure File CSI Driver Operator azure-file-csi-driver-operator Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Azure Disk CSI Driver Operator azure-disk-csi-driver-operator Microsoft.Compute/disks/* Microsoft.Compute/snapshots/* Microsoft.Compute/virtualMachineScaleSets/*/read Microsoft.Compute/virtualMachineScaleSets/read Microsoft.Compute/virtualMachineScaleSets/virtualMachines/write Microsoft.Compute/virtualMachines/*/read Microsoft.Compute/virtualMachines/write Microsoft.Resources/subscriptions/resourceGroups/read This component requires a role rather than a set of permissions. 19.5.3.4. OLM-managed Operator support for authentication with Microsoft Entra Workload ID In addition to OpenShift Container Platform cluster components, some Operators managed by the Operator Lifecycle Manager (OLM) on Azure clusters can use manual mode with Microsoft Entra Workload ID. These Operators authenticate with short-term credentials that are managed outside the cluster. To determine if an Operator supports authentication with Workload ID, see the Operator description in OperatorHub. Additional resources CCO-based workflow for OLM-managed Operators with Microsoft Entra Workload ID 19.5.4. Additional resources Configuring an AWS cluster to use short-term credentials Configuring a GCP cluster to use short-term credentials Configuring a global Microsoft Azure cluster to use short-term credentials Preparing to update a cluster with manually maintained credentials
|
[
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>",
"cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r",
"mycluster-2mpcn",
"azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3",
"{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }",
"{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/managing-cloud-provider-credentials
|
Chapter 5. address
|
Chapter 5. address This chapter describes the commands under the address command. 5.1. address scope create Create a new Address Scope Usage: Table 5.1. Positional arguments Value Summary <name> New address scope name Table 5.2. Command arguments Value Summary -h, --help Show this help message and exit --ip-version {4,6} Ip version (default is 4) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share Share the address scope between projects --no-share Do not share the address scope between projects (default) Table 5.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 5.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.2. address scope delete Delete address scope(s) Usage: Table 5.7. Positional arguments Value Summary <address-scope> Address scope(s) to delete (name or id) Table 5.8. Command arguments Value Summary -h, --help Show this help message and exit 5.3. address scope list List address scopes Usage: Table 5.9. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address scopes of given name in output --ip-version <ip-version> List address scopes of given ip version networks (4 or 6) --project <project> List address scopes according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List address scopes shared between projects --no-share List address scopes not shared between projects Table 5.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 5.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 5.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.4. address scope set Set address scope properties Usage: Table 5.14. 
Positional arguments Value Summary <address-scope> Address scope to modify (name or id) Table 5.15. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set address scope name --share Share the address scope between projects --no-share Do not share the address scope between projects 5.5. address scope show Display address scope details Usage: Table 5.16. Positional arguments Value Summary <address-scope> Address scope to display (name or id) Table 5.17. Command arguments Value Summary -h, --help Show this help message and exit Table 5.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 5.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
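As a simple end-to-end illustration of the commands in this chapter, the following sequence creates, inspects, renames, and deletes an address scope. The scope names are arbitrary examples, all options used are described in the tables above, and the commands assume that your shell is already configured with OpenStack credentials (for example, by sourcing an RC file):
$ openstack address scope create --ip-version 4 --share shared-v4-scope
$ openstack address scope list --ip-version 4
$ openstack address scope show shared-v4-scope
$ openstack address scope set --name renamed-v4-scope shared-v4-scope
$ openstack address scope delete renamed-v4-scope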
|
[
"openstack address scope create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--ip-version {4,6}] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] <name>",
"openstack address scope delete [-h] <address-scope> [<address-scope> ...]",
"openstack address scope list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name <name>] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>] [--share | --no-share]",
"openstack address scope set [-h] [--name <name>] [--share | --no-share] <address-scope>",
"openstack address scope show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-scope>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/address
|
Chapter 4. Component Versions
|
Chapter 4. Component Versions 4.1. Component Versions The full list of component versions used in Red Hat JBoss Data Grid is available at the Customer Portal at https://access.redhat.com/site/articles/488833 .
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.2_release_notes/chap-component_versions
|
Chapter 1. Rust Toolset
|
Chapter 1. Rust Toolset Rust Toolset is a Red Hat offering for developers on Red Hat Enterprise Linux (RHEL). It provides the rustc compiler for the Rust programming language, the Rust package manager Cargo, the rustfmt formatting tool, and required libraries. For Red Hat Enterprise Linux 8, Rust Toolset is available as a module. Rust Toolset is available as packages for Red Hat Enterprise Linux 9. 1.1. Rust Toolset components The following components are available as part of Rust Toolset: Name Version Description rust 1.75.0 The Rust compiler front-end for LLVM. cargo 1.75.0 A build system and dependency manager for Rust. rustfmt 1.75.0 A tool for automatic formatting of Rust code. 1.2. Rust Toolset compatibility Rust Toolset is available for Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 on the following architectures: AMD and Intel 64-bit 64-bit ARM IBM Power Systems, Little Endian 64-bit IBM Z 1.3. Installing Rust Toolset Complete the following steps to install Rust Toolset including all development and debugging tools as well as dependent packages. Note that Rust Toolset has a dependency on LLVM Toolset. Prerequisites All available Red Hat Enterprise Linux updates are installed. Procedure On Red Hat Enterprise Linux 8, install the rust-toolset module by running: On Red Hat Enterprise Linux 9, install the rust-toolset package by running: 1.4. Installing Rust documentation The The Rust Programming Language book is available as installable documentation. Prerequisites Rust Toolset is installed. For more information, see Installing Rust Toolset . Procedure To install the rust-doc package, run the following command: On Red Hat Enterprise Linux 8: You can find the The Rust Programming Language book under the following path: /usr/share/doc/rust/html/index.html . You can find the API documentation for all Rust code packages under the following path: /usr/share/doc/rust/html/std/index.html . On Red Hat Enterprise Linux 9: You can find the The Rust Programming Language book under the following path: /usr/share/doc/rust/html/index.html . You can find the API documentation for all Rust code packages under the following path: /usr/share/doc/rust/html/std/index.html . 1.5. Installing Cargo documentation The Cargo, Rust's Package Manager book is available as installable documentation for Cargo. Note From Rust Toolset 1.66.1, the cargo-doc package is included in the rust-doc package. Prerequisites Rust Toolset is installed. For more information, see Installing Rust Toolset . Procedure To install the cargo-doc package, run: On Red Hat Enterprise Linux 8: You can find the Cargo, Rust's Package Manager book under the following path: /usr/share/doc/cargo/html/index.html . On Red Hat Enterprise Linux 9: You can find the Cargo, Rust's Package Manager book under the following path: /usr/share/doc/cargo/html/index.html . 1.6. Additional resources For more information on the Rust programming language, see the official Rust documentation .
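After installing Rust Toolset, a quick way to confirm that the compiler and Cargo work is to build and run a new Cargo project. This is only an illustrative sketch and the project name hello-rust is arbitrary:
$ rustc --version
$ cargo --version
$ cargo new hello-rust
$ cd hello-rust
$ cargo run
The cargo run command compiles the generated hello-world program and prints Hello, world!, which confirms that rustc and Cargo are installed correctly.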
|
[
"yum module install rust-toolset",
"dnf install rust-toolset",
"yum install rust-doc",
"dnf install rust-doc",
"yum install cargo-doc",
"dnf install cargo-doc"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.75.0_toolset/assembly_rust-toolset_using-rust-toolset
|
probe::netfilter.bridge.forward
|
probe::netfilter.bridge.forward Name probe::netfilter.bridge.forward - Called on an incoming bridging packet destined for some other computer Synopsis netfilter.bridge.forward Values br_fd Forward delay in 1/256 secs nf_queue Constant used to signify a 'queue' verdict brhdr Address of bridge header br_mac Bridge MAC address indev Address of net_device representing input device, 0 if unknown br_msg Message age in 1/256 secs nf_drop Constant used to signify a 'drop' verdict llcproto_stp Constant used to signify Bridge Spanning Tree Protocol packet pf Protocol family -- always " bridge " br_vid Protocol version identifier indev_name Name of network device packet was received on (if known) br_poid Port identifier outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict llcpdu Address of LLC Protocol Data Unit length The length of the packet buffer contents, in bytes nf_stolen Constant used to signify a 'stolen' verdict br_cost Total cost from transmitting bridge to root nf_stop Constant used to signify a 'stop' verdict br_type BPDU type br_max Max age in 1/256 secs br_htime Hello time in 1/256 secs protocol Packet protocol br_bid Identity of bridge br_rmac Root bridge MAC address br_prid Protocol identifier outdev_name Name of network device packet will be routed to (if known) br_flags BPDU flags nf_accept Constant used to signify an 'accept' verdict br_rid Identity of root bridge
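As an illustrative sketch only, a one-line script such as the following could print a line for every bridged packet that reaches this probe point. It assumes the systemtap package and matching kernel debuginfo are installed and that the command is run as root; the output format is invented for the example and uses only the indev_name, outdev_name, and length values listed above:

stap -v -e 'probe netfilter.bridge.forward { printf("bridge forward: %s -> %s, %d bytes\n", indev_name, outdev_name, length) }'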
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netfilter-bridge-forward
|
Chapter 2. MTC release notes
|
Chapter 2. MTC release notes 2.1. Migration Toolkit for Containers 1.8 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.1.1. Migration Toolkit for Containers 1.8.5 release notes 2.1.1.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.5 has the following technical changes: Federal Information Processing Standard (FIPS) FIPS is a set of computer security standards developed by the United States federal government in accordance with the Federal Information Security Management Act (FISMA). Starting with version 1.8.5, MTC is designed for FIPS compliance. 2.1.1.2. Resolved issues For more information, see the list of MTC 1.8.5 resolved issues in Jira. 2.1.1.3. Known issues MTC 1.8.5 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) MTC does not patch statefulset.spec.volumeClaimTemplates[].spec.storageClassName on storage class conversion While running a Storage Class conversion for a StatefulSet application, MTC updates the persistent volume claims (PVC) references in .spec.volumeClaimTemplates[].metadata.name to use the migrated PVC names. MTC does not update spec.volumeClaimTemplates[].spec.storageClassName , which causes the application to scale up. Additionally, new replicas consume PVCs created under the old Storage Class instead of the migrated Storage Class. (MIG-1660) Performing a StorageClass conversion triggers the scale-down of all applications in the namespace When running a StorageClass conversion on more than one application, MTC scales down all the applications in the cutover phase, including those not involved in the migration. (MIG-1661) MigPlan cannot be edited to have the same target namespace as the source cluster after it is changed After changing the target namespace to something different from the source namespace while creating a MigPlan in the MTC UI, you cannot edit the MigPlan again to make the target namespace the same as the source namespace. (MIG-1600) Migrated builder pod fails to push to the image registry When migrating an application that includes BuildConfig from the source to the target cluster, the builder pod encounters an error, failing to push the image to the image registry. (BZ#2234781) Conflict condition clears briefly after it is displayed When creating a new state migration plan that results in a conflict error, the error is cleared shortly after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired warning not displayed after setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning does not appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) For a complete list of all known issues, see the list of MTC 1.8.5 known issues in Jira. 2.1.2. 
Migration Toolkit for Containers 1.8.4 release notes 2.1.2.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.4 has the following technical changes: MTC 1.8.4 extends its dependency resolution to include support for using OpenShift API for Data Protection (OADP) 1.4. Support for KubeVirt Virtual Machines with DirectVolumeMigration MTC 1.8.4 adds support for KubeVirt Virtual Machines (VMs) with Direct Volume Migration (DVM). 2.1.2.2. Resolved issues MTC 1.8.4 has the following major resolved issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. Earlier versions of MTC are impacted, while MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) UI stuck at Namespaces while creating a migration plan When trying to create a migration plan from the MTC UI, the migration plan wizard becomes stuck at the Namespaces step. This issue has been resolved in MTC 1.8.4. (MIG-1597) Migration fails with error of no matches for kind Virtual machine in version kubevirt/v1 During the migration of an application, all the necessary steps, including the backup, DVM, and restore, are successfully completed. However, the migration is marked as unsuccessful with the error message no matches for kind Virtual machine in version kubevirt/v1 . (MIG-1594) Direct Volume Migration fails when migrating to a namespace different from the source namespace On performing a migration from source cluster to target cluster, with the target namespace different from the source namespace, the DVM fails. (MIG-1592) Direct Image Migration does not respect label selector on migplan When using Direct Image Migration (DIM), if a label selector is set on the migration plan, DIM does not respect it and attempts to migrate all imagestreams in the namespace. (MIG-1533) 2.1.2.3. Known issues MTC 1.8.4 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) . Rsync pod fails to start causing the DVM phase to fail The DVM phase fails due to the Rsync pod failing to start, because of a permission issue. (BZ#2231403) Migrated builder pod fails to push to image registry When migrating an application including BuildConfig from source to target cluster, the builder pod results in error, failing to push the image to the image registry. (BZ#2234781) Conflict condition gets cleared briefly after it is created When creating a new state migration plan that results in a conflict error, that error is cleared shorty after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired Warning Not Displayed After Setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning fails to appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) 2.1.3. Migration Toolkit for Containers 1.8.3 release notes 2.1.3.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.3 has the following technical changes: OADP 1.3 is now supported MTC 1.8.3 adds support to OpenShift API for Data Protection (OADP) as a dependency of MTC 1.8.z. 2.1.3.2. 
Resolved issues MTC 1.8.3 has the following major resolved issues: CVE-2024-24786: Flaw in Golang protobuf module causes unmarshal function to enter infinite loop In releases of MTC, a vulnerability was found in Golang's protobuf module, where the unmarshal function entered an infinite loop while processing certain invalid inputs. Consequently, an attacker provided carefully constructed invalid inputs, which caused the function to enter an infinite loop. With this update, the unmarshal function works as expected. For more information, see CVE-2024-24786 . CVE-2023-45857: Axios Cross-Site Request Forgery Vulnerability In releases of MTC, a vulnerability was discovered in Axios 1.5.1 that inadvertently revealed a confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to the host, allowing attackers to view sensitive information. For more information, see CVE-2023-45857 . Restic backup does not work properly when the source workload is not quiesced In releases of MTC, some files did not migrate when deploying an application with a route. The Restic backup did not function as expected when the quiesce option was unchecked for the source workload. This issue has been resolved in MTC 1.8.3. For more information, see BZ#2242064 . The Migration Controller fails to install due to an unsupported value error in Velero The MigrationController failed to install due to an unsupported value error in Velero. Updating OADP 1.3.0 to OADP 1.3.1 resolves this problem. For more information, see BZ#2267018 . This issue has been resolved in MTC 1.8.3. For a complete list of all resolved issues, see the list of MTC 1.8.3 resolved issues in Jira. 2.1.3.3. Known issues Migration Toolkit for Containers (MTC) 1.8.3 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform version 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) . For a complete list of all known issues, see the list of MTC 1.8.3 known issues in Jira. 2.1.4. Migration Toolkit for Containers 1.8.2 release notes 2.1.4.1. Resolved issues This release has the following major resolved issues: Backup phase fails after setting custom CA replication repository In releases of Migration Toolkit for Containers (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurred during the backup phase. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution In releases of (MTC), versions before 4.1.3 of the tough-cookie package used in MTC were vulnerable to prototype pollution. This vulnerability occurred because CookieJar did not handle cookies properly when the value of the rejectPublicSuffixes was set to false . 
For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In releases of (MTC), versions of the semver package before 7.5.2, used in MTC, were vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data was provided as a range. For more details, see (CVE-2022-25883) 2.1.4.2. Known issues MTC 1.8.2 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.5. Migration Toolkit for Containers 1.8.1 release notes 2.1.5.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.1 has the following major resolved issues: CVE-2023-39325: golang: net/http, x/net/http2: rapid stream resets can cause excessive work A flaw was found in handling multiplexed streams in the HTTP/2 protocol, which is used by MTC. A client could repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates additional workload for the server in terms of setting up and dismantling streams, while avoiding any server-side limitations on the maximum number of active streams per connection, resulting in a denial of service due to server resource consumption. (BZ#2245079) It is advised to update to MTC 1.8.1 or later, which resolve this issue. For more details, see (CVE-2023-39325) and (CVE-2023-44487) 2.1.5.2. Known issues Migration Toolkit for Containers (MTC) 1.8.1 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes. An exception, ValueError: too many values to unpack , is returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.6. Migration Toolkit for Containers 1.8.0 release notes 2.1.6.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.0 has the following resolved issues: Indirect migration is stuck on backup stage In releases, an indirect migration became stuck at the backup stage, due to InvalidImageName error. ( (BZ#2233097) ) PodVolumeRestore remain In Progress keeping the migration stuck at Stage Restore In releases, on performing an indirect migration, the migration became stuck at the Stage Restore step, waiting for the podvolumerestore to be completed. ( (BZ#2233868) ) Migrated application unable to pull image from internal registry on target cluster In releases, on migrating an application to the target cluster, the migrated application failed to pull the image from the internal image registry resulting in an application failure . ( (BZ#2233103) ) Migration failing on Azure due to authorization issue In releases, on an Azure cluster, when backing up to Azure storage, the migration failed at the Backup stage. ( (BZ#2238974) ) 2.1.6.2. 
Known issues MTC 1.8.0 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception ValueError: too many values to unpack returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) Old Restic pods are not getting removed on upgrading MTC 1.7.x to 1.8.x In this release, on upgrading the MTC Operator from 1.7.x to 1.8.x, the old Restic pods are not being removed. Therefore after the upgrade, both Restic and node-agent pods are visible in the namespace. ( (BZ#2236829) ) Migrated builder pod fails to push to image registry In this release, on migrating an application including a BuildConfig from a source to target cluster, the builder pod results in an error, failing to push the image to the image registry. ( (BZ#2234781) ) [UI] CA bundle file field is not properly cleared In this release, after enabling Require SSL verification and adding content to the CA bundle file for an MCG NooBaa bucket in MigStorage, the connection fails as expected. However, when reverting these changes by removing the CA bundle content and clearing Require SSL verification , the connection still fails. The issue is only resolved by deleting and re-adding the repository. ( (BZ#2240052) ) Backup phase fails after setting custom CA replication repository In MTC, after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurs during the backup phase. This issue is resolved in MTC 1.8.2. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution Versions before 4.1.3 of the tough-cookie package, used in MTC, are vulnerable to prototype pollution. This vulnerability occurs because CookieJar does not handle cookies properly when the value of the rejectPublicSuffixes is set to false . This issue is resolved in MTC 1.8.2. For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In earlier releases of MTC, versions of the semver package before 7.5.2, used in MTC, are vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data is provided as a range. This issue is resolved in MTC 1.8.2. For more details, see (CVE-2022-25883) 2.1.6.3. Technical changes This release has the following technical changes: Migration from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy Migration Toolkit for Containers Operator and Migration Toolkit for Containers 1.7.x. Migration from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. Migration Toolkit for Containers (MTC) 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x might be used. However, it must be the same MTC 1.Y.z on both source and destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported.
Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. MTC 1.8.x by default installs OADP 1.2.x. Upgrading from MTC 1.7.x to MTC 1.8.0, requires manually changing the OADP channel to 1.2. If this is not done, the upgrade of the Operator fails. 2.2. Migration Toolkit for Containers 1.7 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.15 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.2.1. Migration Toolkit for Containers 1.7.18 release notes Migration Toolkit for Containers (MTC) 1.7.18 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of MTC 1.7.17. 2.2.1.1. Technical changes Migration Toolkit for Containers (MTC) 1.7.18 has the following technical changes: Federal Information Processing Standard (FIPS) FIPS is a set of computer security standards developed by the United States federal government in accordance with the Federal Information Security Management Act (FISMA). Starting with version 1.7.18, MTC is designed for FIPS compliance. 2.2.2. Migration Toolkit for Containers 1.7.17 release notes Migration Toolkit for Containers (MTC) 1.7.17 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of MTC 1.7.16. 2.2.3. Migration Toolkit for Containers 1.7.16 release notes 2.2.3.1. Resolved issues This release has the following resolved issues: CVE-2023-45290: Golang: net/http : Memory exhaustion in the Request.ParseMultipartForm method A flaw was found in the net/http Golang standard library package, which impacts earlier versions of MTC. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue , Request.PostFormValue , or Request.FormFile methods, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2023-45290 CVE-2024-24783: Golang: crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts earlier versions of MTC. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24783 . 
CVE-2024-24784: Golang: net/mail : Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts earlier versions of MTC. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24784 . CVE-2024-24785: Golang: html/template : Errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts earlier versions of MTC. If errors returned from MarshalJSON methods contain user-controlled data, they could be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into templates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24785 . CVE-2024-29180: webpack-dev-middleware : Lack of URL validation may lead to file leak A flaw was found in the webpack-dev-middleware package , which impacts earlier versions of MTC. This flaw fails to validate the supplied URL address sufficiently before returning local files, which could allow an attacker to craft URLs to return arbitrary local files from the developer's machine. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-29180 . CVE-2024-30255: envoy : HTTP/2 CPU exhaustion due to CONTINUATION frame flood A flaw was found in how the envoy proxy implements the HTTP/2 codec, which impacts earlier versions of MTC. There are insufficient limitations placed on the number of CONTINUATION frames that can be sent within a single stream, even after exceeding the header map limits of envoy . This flaw could allow an unauthenticated remote attacker to send packets to vulnerable servers. These packets could consume compute resources and cause a denial of service (DoS). To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-30255 . 2.2.3.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with a Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, but the Direct Volume Migration (DVM) fails with the rsync pod on the source namespace moving into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that returns a conflict error message, the error message is cleared very shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations of different provider types configured in a cluster When there are multiple Volume Snapshot Locations (VSLs) in a cluster with different provider types, but you have not set any of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.4. Migration Toolkit for Containers 1.7.15 release notes 2.2.4.1. 
Resolved issues This release has the following resolved issues: CVE-2024-24786: A flaw was found in Golang's protobuf module, where the unmarshal function can enter an infinite loop A flaw was found in the protojson.Unmarshal function that could cause the function to enter an infinite loop when unmarshaling certain forms of invalid JSON messages. This condition could occur when unmarshaling into a message that contained a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option was set in a JSON-formatted message. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-24786) . CVE-2024-28180: jose-go improper handling of highly compressed data A vulnerability was found in Jose due to improper handling of highly compressed data. An attacker could send a JSON Web Encryption (JWE) encrypted message that contained compressed data that used large amounts of memory and CPU when decompressed by the Decrypt or DecryptMulti functions. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-28180) . 2.2.4.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, and Direct Volume Migration (DVM) fails with the rsync pod on the source namespace going into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that results in a conflict error message, the error message is cleared shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations (VSLs) of different provider types configured in a cluster with no specified default VSL. When there are multiple VSLs in a cluster with different provider types, and you set none of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.5. Migration Toolkit for Containers 1.7.14 release notes 2.2.5.1. Resolved issues This release has the following resolved issues: CVE-2023-39325 CVE-2023-44487: various flaws A flaw was found in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by Migration Toolkit for Containers (MTC). A client could repeatedly make a request for a new multiplex stream then immediately send an RST_STREAM frame to cancel those requests. This activity created additional workloads for the server in terms of setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. (BZ#2243564) (BZ#2244013) (BZ#2244014) (BZ#2244015) (BZ#2244016) (BZ#2244017) To resolve this issue, upgrade to MTC 1.7.14. For more details, see (CVE-2023-44487) and (CVE-2023-39325) . CVE-2023-39318 CVE-2023-39319 CVE-2023-39321: various flaws (CVE-2023-39318) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not properly handle HTML-like "" comment tokens, or the hashbang "#!" comment tokens, in <script> contexts. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39319) : A flaw was discovered in Golang, utilized by MTC. 
The html/template package did not apply the proper rules for handling occurrences of "<script" , "<!--" , and "</script" within JavaScript literals in <script> contexts. This could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39321) : A flaw was discovered in Golang, utilized by MTC. Processing an incomplete post-handshake message for a QUIC connection could cause a panic. (BZ#2238062) (BZ#2238088) (CVE-2023-3932) : A flaw was discovered in Golang, utilized by MTC. Connections using the QUIC transport protocol did not set an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. (BZ#2238088) To resolve these issues, upgrade to MTC 1.7.14. For more details, see (CVE-2023-39318) , (CVE-2023-39319) , and (CVE-2023-39321) . 2.2.5.2. Known issues There are no major known issues in this release. 2.2.6. Migration Toolkit for Containers 1.7.13 release notes 2.2.6.1. Resolved issues There are no major resolved issues in this release. 2.2.6.2. Known issues There are no major known issues in this release. 2.2.7. Migration Toolkit for Containers 1.7.12 release notes 2.2.7.1. Resolved issues There are no major resolved issues in this release. 2.2.7.2. Known issues This release has the following known issues: Error code 504 is displayed on the Migration details page On the Migration details page, at first, the migration details are displayed without any issues. However, after sometime, the details disappear, and a 504 error is returned. ( BZ#2231106 ) Old restic pods are not removed when upgrading Migration Toolkit for Containers 1.7.x to Migration Toolkit for Containers 1.8 On upgrading the Migration Toolkit for Containers (MTC) operator from 1.7.x to 1.8.x, the old restic pods are not removed. After the upgrade, both restic and node-agent pods are visible in the namespace. ( BZ#2236829 ) 2.2.8. Migration Toolkit for Containers 1.7.11 release notes 2.2.8.1. Resolved issues There are no major resolved issues in this release. 2.2.8.2. Known issues There are no known issues in this release. 2.2.9. Migration Toolkit for Containers 1.7.10 release notes 2.2.9.1. Resolved issues This release has the following major resolved issue: Adjust rsync options in DVM In this release, you can prevent absolute symlinks from being manipulated by Rsync in the course of direct volume migration (DVM). Running DVM in privileged mode preserves absolute symlinks inside the persistent volume claims (PVCs). To switch to privileged mode, in the MigrationController CR, set the migration_rsync_privileged spec to true . ( BZ#2204461 ) 2.2.9.2. Known issues There are no known issues in this release. 2.2.10. Migration Toolkit for Containers 1.7.9 release notes 2.2.10.1. Resolved issues There are no major resolved issues in this release. 2.2.10.2. Known issues This release has the following known issue: Adjust rsync options in DVM In this release, users are unable to prevent absolute symlinks from being manipulated by rsync during direct volume migration (DVM). ( BZ#2204461 ) 2.2.11. Migration Toolkit for Containers 1.7.8 release notes 2.2.11.1. 
Resolved issues This release has the following major resolved issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In earlier releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) Adding a MigCluster from the UI fails when the domain name has more than six characters In earlier releases, adding a MigCluster from the UI failed when the domain name had more than six characters. The UI code expected a domain name of between two and six characters. ( BZ#2152149 ) UI fails to render the Migrations' page: Cannot read properties of undefined (reading 'name') In earlier releases, the UI failed to render the Migrations' page, returning Cannot read properties of undefined (reading 'name') . ( BZ#2163485 ) Creating DPA resource fails on Red Hat OpenShift Container Platform 4.6 clusters In earlier releases, when deploying MTC on an OpenShift Container Platform 4.6 cluster, the DPA failed to be created according to the logs, which resulted in some pods missing. From the logs in the migration-controller in the OpenShift Container Platform 4.6 cluster, it indicated that an unexpected null value was passed, which caused the error. ( BZ#2173742 ) 2.2.11.2. Known issues There are no known issues in this release. 2.2.12. Migration Toolkit for Containers 1.7.7 release notes 2.2.12.1. Resolved issues There are no major resolved issues in this release. 2.2.12.2. Known issues There are no known issues in this release. 2.2.13. Migration Toolkit for Containers 1.7.6 release notes 2.2.13.1. New features Implement proposed changes for DVM support with PSA in Red Hat OpenShift Container Platform 4.12 With the incoming enforcement of Pod Security Admission (PSA) in OpenShift Container Platform 4.12, the default pod would run with a restricted profile. This restricted profile would mean that workloads to be migrated would be in violation of this policy and would no longer work. The following enhancement outlines the changes that would be required to remain compatible with OCP 4.12. ( MIG-1240 ) 2.2.13.2. Resolved issues This release has the following major resolved issues: Unable to create Storage Class Conversion plan due to missing cronjob error in Red Hat OpenShift Platform 4.12 In earlier releases, on the persistent volumes page, an error is thrown that a CronJob is not available in version batch/v1beta1 , and when clicking on cancel, the migplan is created with status Not ready . ( BZ#2143628 ) 2.2.13.3. Known issues This release has the following known issue: Conflict conditions are cleared briefly after they are created When creating a new state migration plan that will result in a conflict error, that error is cleared shortly after it is displayed. ( BZ#2144299 ) 2.2.14. Migration Toolkit for Containers 1.7.5 release notes 2.2.14.1. Resolved issues This release has the following major resolved issue: Direct Volume Migration is failing as rsync pod on source cluster moves into Error state In earlier releases, migration succeeded with warnings, but Direct Volume Migration failed with the rsync pod on the source namespace going into an error state. ( BZ#2132978 ) 2.2.14.2. Known issues This release has the following known issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In earlier releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR).
( BZ#2143389 ) When editing a MigHook in the UI, the page might fail to reload The UI might fail to reload when editing a hook if there is a network connection issue. After the network connection is restored, the page will fail to reload until the cache is cleared. ( BZ#2140208 ) 2.2.15. Migration Toolkit for Containers 1.7.4 release notes 2.2.15.1. Resolved issues There are no major resolved issues in this release. 2.2.15.2. Known issues Rollback missing out deletion of some resources from the target cluster On performing the roll back of an application from the Migration Toolkit for Containers (MTC) UI, some resources are not being deleted from the target cluster and the roll back is showing a status as successfully completed. ( BZ#2126880 ) 2.2.16. Migration Toolkit for Containers 1.7.3 release notes 2.2.16.1. Resolved issues This release has the following major resolved issues: Correct DNS validation for destination namespace In releases, the MigPlan could not be validated if the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) Deselecting all PVCs from UI still results in an attempted PVC transfer In releases, while doing a full migration, unselecting the persistent volume claims (PVCs) would not skip selecting the PVCs and still try to migrate them. ( BZ#2106073 ) Incorrect DNS validation for destination namespace In releases, MigPlan could not be validated because the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) 2.2.16.2. Known issues There are no known issues in this release. 2.2.17. Migration Toolkit for Containers 1.7.2 release notes 2.2.17.1. Resolved issues This release has the following major resolved issues: MTC UI does not display logs correctly In releases, the Migration Toolkit for Containers (MTC) UI did not display logs correctly. ( BZ#2062266 ) StorageClass conversion plan adding migstorage reference in migplan In releases, StorageClass conversion plans had a migstorage reference even though it was not being used. ( BZ#2078459 ) Velero pod log missing from downloaded logs In releases, when downloading a compressed (.zip) folder for all logs, the velero pod was missing. ( BZ#2076599 ) Velero pod log missing from UI drop down In releases, after a migration was performed, the velero pod log was not included in the logs provided in the dropdown list. ( BZ#2076593 ) Rsync options logs not visible in log-reader pod In releases, when trying to set any valid or invalid rsync options in the migrationcontroller , the log-reader was not showing any logs regarding the invalid options or about the rsync command being used. ( BZ#2079252 ) Default CPU requests on Velero/Restic are too demanding and fail in certain environments In releases, the default CPU requests on Velero/Restic were too demanding and fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values were high. ( BZ#2088022 ) 2.2.17.2. Known issues This release has the following known issues: Updating the replication repository to a different storage provider type is not respected by the UI After updating the replication repository to a different type and clicking Update Repository , it shows connection successful, but the UI is not updated with the correct details. When clicking on the Edit button again, it still shows the old replication repository information. Furthermore, when trying to update the replication repository again, it still shows the old replication details. 
When selecting the new repository, it also shows all the information you entered previously and the Update repository is not enabled, as if there are no changes to be submitted. ( BZ#2102020 ) Migrations fails because the backup is not found Migration fails at the restore stage because of initial backup has not been found. ( BZ#2104874 ) Update Cluster button is not enabled when updating Azure resource group When updating the remote cluster, selecting the Azure resource group checkbox, and adding a resource group does not enable the Update cluster option. ( BZ#2098594 ) Error pop-up in UI on deleting migstorage resource When creating a backupStorage credential secret in OpenShift Container Platform, if the migstorage is removed from the UI, a 404 error is returned and the underlying secret is not removed. ( BZ#2100828 ) Miganalytic resource displaying resource count as 0 in UI After creating a migplan from backend, the Miganalytic resource displays the resource count as 0 in UI. ( BZ#2102139 ) Registry validation fails when two trailing slashes are added to the Exposed route host to image registry After adding two trailing slashes, meaning // , to the exposed registry route, the MigCluster resource is showing the status as connected . When creating a migplan from backend with DIM, the plans move to the unready status. ( BZ#2104864 ) Service Account Token not visible while editing source cluster When editing the source cluster that has been added and is in Connected state, in the UI, the service account token is not visible in the field. To save the wizard, you have to fetch the token again and provide details inside the field. ( BZ#2097668 ) 2.2.18. Migration Toolkit for Containers 1.7.1 release notes 2.2.18.1. Resolved issues There are no major resolved issues in this release. 2.2.18.2. Known issues This release has the following known issues: Incorrect DNS validation for destination namespace MigPlan cannot be validated because the destination namespace starts with a non-alphabetic character. ( BZ#2102231 ) Cloud propagation phase in migration controller is not functioning due to missing labels on Velero pods The Cloud propagation phase in the migration controller is not functioning due to missing labels on Velero pods. The EnsureCloudSecretPropagated phase in the migration controller waits until replication repository secrets are propagated on both sides. As this label is missing on Velero pods, the phase is not functioning as expected. ( BZ#2088026 ) Default CPU requests on Velero/Restic are too demanding when making scheduling fail in certain environments Default CPU requests on Velero/Restic are too demanding when making scheduling fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values are high. The resources can be configured in DPA using the podConfig field for Velero and Restic. Migration operator should set CPU requests to a lower value, such as 100m, so that Velero and Restic pods can be scheduled in resource constrained environments Migration Toolkit for Containers (MTC) often operates in. ( BZ#2088022 ) Warning is displayed on persistentVolumes page after editing storage class conversion plan A warning is displayed on the persistentVolumes page after editing the storage class conversion plan. When editing the existing migration plan, a warning is displayed on the UI At least one PVC must be selected for Storage Class Conversion . 
( BZ#2079549 ) Velero pod log missing from downloaded logs When downloading a compressed (.zip) folder for all logs, the velero pod is missing. ( BZ#2076599 ) Velero pod log missing from UI drop down After a migration is performed, the velero pod log is not included in the logs provided in the dropdown list. ( BZ#2076593 ) 2.2.19. Migration Toolkit for Containers 1.7.0 release notes 2.2.19.1. New features and enhancements This release has the following new features and enhancements: The Migration Toolkit for Containers (MTC) Operator now depends upon the OpenShift API for Data Protection (OADP) Operator. When you install the MTC Operator, the Operator Lifecycle Manager (OLM) automatically installs the OADP Operator in the same namespace. You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters by using the crane tunnel-api command. Converting storage classes in the MTC web console: You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. 2.2.19.2. Known issues This release has the following known issues: MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Direct and indirect data transfers do not work if the destination storage is a PV that is dynamically provisioned by the AWS Elastic File System (EFS). This is due to limitations of the AWS EFS Container Storage Interface (CSI) driver. ( BZ#2085097 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . MTC 1.7.6 cannot migrate cron jobs from source clusters that support v1beta1 cron jobs to clusters of OpenShift Container Platform 4.12 and later, which do not support v1beta1 cron jobs. ( BZ#2149119 ) 2.3. Migration Toolkit for Containers 1.6 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.15 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.3.1. Migration Toolkit for Containers 1.6 release notes 2.3.1.1. New features and enhancements This release has the following new features and enhancements: State migration: You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). "New operator version available" notification: The Clusters page of the MTC web console displays a notification when a new Migration Toolkit for Containers Operator is available. 2.3.1.2. Deprecated features The following features are deprecated: MTC version 1.4 is no longer supported. 2.3.1.3. Known issues This release has the following known issues: On OpenShift Container Platform 3.10, the MigrationController pod takes too long to restart. The Bugzilla report contains a workaround. ( BZ#1986796 ) Stage pods fail during direct volume migration from a classic OpenShift Container Platform source cluster on IBM Cloud. 
The IBM block storage plugin does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. ( BZ#1887526 ) MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . 2.4. Migration Toolkit for Containers 1.5 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.15 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.4.1. Migration Toolkit for Containers 1.5 release notes 2.4.1.1. New features and enhancements This release has the following new features and enhancements: The Migration resource tree on the Migration details page of the web console has been enhanced with additional resources, Kubernetes events, and live status information for monitoring and debugging migrations. The web console can support hundreds of migration plans. A source namespace can be mapped to a different target namespace in a migration plan. Previously, the source namespace was mapped to a target namespace with the same name. Hook phases with status information are displayed in the web console during a migration. The number of Rsync retry attempts is displayed in the web console during direct volume migration. Persistent volume (PV) resizing can be enabled for direct volume migration to ensure that the target cluster does not run out of disk space. The threshold that triggers PV resizing is configurable. Previously, PV resizing occurred when the disk usage exceeded 97%. Velero has been updated to version 1.6, which provides numerous fixes and enhancements. Cached Kubernetes clients can be enabled to provide improved performance. 2.4.1.2. Deprecated features The following features are deprecated: MTC versions 1.2 and 1.3 are no longer supported. The procedure for updating deprecated APIs has been removed from the troubleshooting section of the documentation because the oc convert command is deprecated. 2.4.1.3. Known issues This release has the following known issues: Microsoft Azure storage is unavailable if you create more than 400 migration plans. The MigStorage custom resource displays the following message: The request is being throttled as the limit has been reached for operation type . ( BZ#1977226 ) If a migration fails, the migration plan does not retain custom persistent volume (PV) settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) PV resizing does not work as expected for AWS gp2 storage unless the pv_resizing_threshold is 42% or greater. 
( BZ#1973148 ) PV resizing does not work with OpenShift Container Platform 3.7 and 3.9 source clusters in the following scenarios: The application was installed after MTC was installed. An application pod was rescheduled on a different node after MTC was installed. OpenShift Container Platform 3.7 and 3.9 do not support the Mount Propagation feature that enables Velero to mount PVs automatically in the Restic pod. The MigAnalytic custom resource (CR) fails to collect PV data from the Restic pod and reports the resources as 0 . The MigPlan CR displays a status similar to the following: Example output status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: "True" type: ExtendedPVAnalysisFailed To enable PV resizing, you can manually restart the Restic daemonset on the source cluster or restart the Restic pods on the same nodes as the application. If you do not restart Restic, you can run the direct volume migration without PV resizing. ( BZ#1982729 ) 2.4.1.4. Technical changes This release has the following technical changes: The legacy Migration Toolkit for Containers Operator version 1.5.1 is installed manually on OpenShift Container Platform versions 3.7 to 4.5. The Migration Toolkit for Containers Operator version 1.5.1 is installed on OpenShift Container Platform versions 4.6 and later by using the Operator Lifecycle Manager.
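As noted in the MTC 1.7.10 entry above, privileged Rsync for direct volume migration is controlled through the migration_rsync_privileged spec of the MigrationController CR. The following command is only an illustrative sketch of one way such a setting could be applied; the resource name migration-controller and the openshift-migration namespace are the usual defaults but are assumptions that should be verified in your cluster:

oc patch migrationcontroller migration-controller -n openshift-migration --type merge -p '{"spec":{"migration_rsync_privileged":true}}'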
|
[
"status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migration_toolkit_for_containers/mtc-release-notes-1
|
B.2. Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
|
B.2. Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 6.5 and later releases support cluster configuration with Pacemaker, using the pcs configuration tool. There are, however, some differences in cluster installation between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 when using Pacemaker. The following commands install the Red Hat High Availability Add-On software packages that Pacemaker requires in Red Hat Enterprise Linux 6 and prevent corosync from starting without cman . You must enter these commands on each node in the cluster. On each node in the cluster, you set up a password for the pcs administration account named hacluster , and you start and enable the pcsd service. On one node in the cluster, you then authenticate the administration account for the nodes of the cluster. In Red Hat Enterprise Linux 7, you run the following commands on each node in the cluster to install the Red Hat High Availability Add-On software packages that Pacemaker requires, set up a password for the pcs administration account named hacluster , and start and enable the pcsd service. In Red Hat Enterprise Linux 7, as in Red Hat Enterprise Linux 6, you authenticate the administration account for the nodes of the cluster by running the following command on one node in the cluster. For further information on installation in Red Hat Enterprise Linux 7, see Chapter 1, Red Hat High Availability Add-On Configuration and Management Reference Overview and Chapter 4, Cluster Creation and Administration .
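As a concrete illustration of the Red Hat Enterprise Linux 7 flow described above, the following sketch uses the placeholder node names z1.example.com and z2.example.com; the final pcs cluster setup step is not covered in this section and is included only as an assumed next step after authentication:

# on every node in the cluster
yum install pcs pacemaker fence-agents-all
passwd hacluster
systemctl start pcsd.service
systemctl enable pcsd.service

# on one node only: authenticate the nodes, then (optionally) create and start the cluster
pcs cluster auth z1.example.com z2.example.com -u hacluster
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com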
|
[
"yum install pacemaker cman pcs chkconfig corosync off chkconfig cman off",
"passwd hacluster service pcsd start chkconfig pcsd on",
"pcs cluster auth [ node ] [...] [-u username ] [-p password ]",
"yum install pcs pacemaker fence-agents-all passwd hacluster systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth [ node ] [...] [-u username ] [-p password ]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-pacemaker65-70-haar
|
A.8. Live Migration Errors
|
A.8. Live Migration Errors There may be cases where a guest changes memory too fast, and the live migration process has to transfer it over and over again and fails to finish (converge). The current live-migration implementation has a default migration time configured to 30ms. This value determines the guest pause time at the end of the migration in order to transfer the leftovers. Higher values increase the odds that live migration will converge.
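When a migration does not converge, one option is to allow a longer pause at the end of the migration with virsh. The commands below are an illustrative sketch only; the domain name guest1, the 100 millisecond value, and the destination URI are placeholders:

# raise the maximum tolerable downtime for the running migration of guest1 to 100 ms
virsh migrate-setmaxdowntime guest1 100

# example live migration command that the setting applies to
virsh migrate --live --verbose guest1 qemu+ssh://destination.example.com/system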
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-live_migration_errors
|
Chapter 7. Infrastructure requirements
|
Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.17 is supported only on OpenShift Container Platform version 4.17 and its minor versions. Bug fixes for version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An Internal cluster must meet both, storage device requirements and have a storage class that provides, EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. In case a high throughput is required, gp3-csi is recommended to be used when deploying OpenShift Data Foundation. If you need a high input/output operation per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provide local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 7.0 or later vSphere 8.0 or later For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an Internal cluster must meet both the, storage device requirements and have a storage class providing either, vSAN or VMFS datastore via the vsphere-volume provisioner VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both, storage device requirements and have a storage class that provides, an azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both, storage device requirements and have a storage class that provides, a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both, storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An Internal cluster must meet both, storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86. 
An internal cluster must meet both the storage device requirements and have a storage class providing local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. ROSA with hosted control planes (HCP) Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides AWS EBS volumes via the gp3-csi provisioner. 7.1.10. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSDs (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab. Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop-down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions on how to install an RHCS cluster, see the installation guide . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation service pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs. Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with an additional 500 GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GiB of memory are required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation service pods are scheduled by Kubernetes on OpenShift Container Platform nodes.
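Before comparing these aggregate requirements against your environment, it can help to list what each node actually provides. A minimal sketch (it assumes the oc client is logged in to the cluster; the custom-columns paths read the standard Node capacity fields):

oc get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory

Sum the CPU and memory across the nodes you intend to use for OpenShift Data Foundation and check the totals against the tables above.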
Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs. 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to a standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides flexibility in deployment and helps to reduce resource consumption. Table 7.6. Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale is between 1 and 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 GiB of RAM. NFS is optional and is disabled by default; a sketch of how to enable it follows this section. The NFS volume can be accessed in two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS.
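Because NFS is disabled by default, it has to be turned on explicitly before the resource figures above apply. A minimal sketch of enabling it by patching the StorageCluster resource (this assumes the default StorageCluster name ocs-storagecluster and the openshift-storage namespace; adjust both to your deployment):

oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{"spec": {"nfs": {"enable": true}}}'

After the change is reconciled, the NFS service pods are created and start consuming the CPU and memory listed above.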
7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level, either during deployment or post deployment. Table 7.7. Recommended resource requirement for different performance profiles Performance profile CPU Memory Lean 24 72 GiB Balanced 30 72 GiB Performance 45 96 GiB Important Make sure to select the profile based on the available free resources, as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for an internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node. This recommendation ensures both that nodes stay below cloud provider dynamic storage device attachment limits and that recovery time after node failures with local storage devices remains limited. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in increments of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements . 7.5.2. Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity.
Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you reach 75% (near-full), either free up space or expand the cluster. When you receive the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support. The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.8. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.9. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D * M * N / 3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB
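The usable capacity column follows directly from the replication factor of 3. A minimal sketch of the arithmetic for the last table row, including roughly where the 75% and 85% alert thresholds land in terms of stored data (example values only):

D=4; M=9; N=30                      # device size (TiB), devices per node, nodes
TOTAL=$((D * M * N))                # raw capacity: 1080 TiB
USABLE=$((TOTAL / 3))               # 3-way replication: 360 TiB usable
echo "near-full at $((USABLE * 75 / 100)) TiB, full at $((USABLE * 85 / 100)) TiB"

For this configuration the near-full alert corresponds to about 270 TiB and the full alert to about 306 TiB of used capacity.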
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/infrastructure-requirements_rhodf
|
Installing on a single node
|
Installing on a single node OpenShift Container Platform 4.12 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team
|
[
"example.com",
"<cluster_name>.example.com",
"export OCP_VERSION=<ocp_version> 1",
"export ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"apiVersion: v1 baseDomain: <domain> 1 compute: - architecture: amd64 2 name: worker replicas: 0 3 controlPlane: architecture: amd64 name: master replicas: 1 4 metadata: name: <name> 5 networking: 6 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 7 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 8 pullSecret: '<pull_secret>' 9 sshKey: | <ssh_key> 10",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'",
"coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso",
"./openshift-install --dir=ocp wait-for install-complete",
"export KUBECONFIG=ocp/auth/kubeconfig",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.25.0",
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"variant: openshift version: 4.12.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'",
"butane -pr embedded.yaml -o embedded.ign",
"coreos-installer iso ignition embed -i embedded.ign rhcos-4.12.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.12.0-x86_64-live.x86_64.iso",
"coreos-installer iso ignition show rhcos-sshd-4.12.0-x86_64-live.x86_64.iso",
"{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_a_single_node/index
|
Chapter 2. Differences from upstream OpenJDK 8
|
Chapter 2. Differences from upstream OpenJDK 8 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 8 changes: FIPS support. Red Hat build of OpenJDK 8 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 8 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 8 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. It also dynamically links against HarfBuzz and FreeType for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificates from RHEL. Additional resources See Improve system FIPS detection (RHEL Planning Jira) See Using system-wide cryptographic policies (RHEL documentation)
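Because the FIPS and cryptographic policy behavior is driven by the host configuration, it is often useful to confirm what the JDK will pick up on a given RHEL system. A minimal sketch (the first two are standard RHEL commands; the com.redhat.fips property is an assumption to verify against your build's documentation before relying on it):

# Is the host in FIPS mode? The JDK aligns with this automatically.
fips-mode-setup --check

# Which system-wide crypto policy is active (DEFAULT, LEGACY, FIPS, ...)?
update-crypto-policies --show

# Hypothetical example: run one Java process without the automatic FIPS alignment.
java -Dcom.redhat.fips=false -jar myapp.jar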
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.392/rn-openjdk-diff-from-upstream
|
Chapter 2. Top new features
|
Chapter 2. Top new features This section provides an overview of the top new features in this release of Red Hat OpenStack Platform (RHOSP). 2.1. Backup and restore This section outlines the top new features related to backing up and restoring the Red Hat OpenStack Platform (RHOSP) undercloud and control plane nodes. Snapshot and revert The RHOSP snapshot and revert feature is based on the Logical Volume Manager (LVM) snapshot functionality and reverts an unsuccessful upgrade or update. Snapshots preserve the original disk state of your RHOSP cluster before performing an upgrade or an update. You can then remove or revert the snapshots depending on the results. If an upgrade completes successfully and you do not need the snapshots anymore, remove them from your nodes. If an upgrade fails, you can revert the snapshots, assess any errors, and start the upgrade procedure again. A revert leaves the disks of all the nodes exactly as they were when the snapshot was taken. 2.2. Bare Metal provisioning This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) Bare Metal Provisioning service (ironic). LVM thin provisioning In RHOSP 17.1, the LVM volumes installed by the overcloud-hardened-uefi-full.qcow2 whole disk overcloud image are now backed by a thin pool. By default, the volumes expand to consume the available physical storage, but they are not over-provisioned. 2.3. Compute This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) Compute service (nova). Moving to Q35 default machine type The default machine type for each host architecture is Q35 for new RHOSP 17 deployments. The Q35 machine type provides several benefits and improvements, including live migration of instances between different RHEL 9.x minor releases, and native PCIe hotplug, which is faster than the ACPI hotplug used by the i440FX machine type. You can still use the i440FX machine type. Emulated virtual Trusted Platform Module (vTPM) devices for instances You can use TPM to enhance computer security and provide a chain of trust for virtualization. The emulated vTPM is a software-based representation of a physical TPM chip. An administrator can provide cloud users the ability to create instances that have vTPM devices. UEFI Secure Boot Cloud users can launch instances that are protected with UEFI Secure Boot when the overcloud contains UEFI Secure Boot Compute nodes. For information about creating an image for UEFI Secure Boot, see Creating an image for UEFI Secure Boot . For information about creating a flavor for UEFI Secure Boot, see "UEFI Secure Boot" in Flavor metadata . Ability to create instances that have a mix of dedicated and shared CPUs You can now create flavors that have a mixed CPU policy to enable your cloud users to create instances that have a mix of dedicated (pinned) and shared (unpinned) CPUs. VirtIO data path acceleration (VDPA) support for enterprise workloads On RHOSP deployments that are configured for OVS hardware offload and to use ML2/OVN, and that have Compute nodes with VDPA devices and drivers and Mellanox NICs, you can enable your cloud users to create instances that use VirtIO data path acceleration (VDPA) ports. For more information, see Configuring VDPA Compute nodes to enable instances that use VDPA ports and Creating an instance with a VDPA interface . 
Scheduler support for routed networks On RHOSP deployments that use a routed provider network, you can now configure the Compute scheduler to filter Compute nodes that have affinity with routed network segments, and verify the network in placement before scheduling an instance on a Compute node. You can enable this feature by using the NovaSchedulerQueryPlacementForRoutedNetworkAggregates parameter. 2.4. Distributed Compute Nodes (DCN) This section outlines the top new features for Distributed Compute Nodes (DCN). Framework for upgrades for Distributed Compute Node Architecture In RHOSP 17.1.3, Red Hat now supports upgrading edge deployed architectures from 16.2 to 17.1 using the framework for upgrades workflow. 2.5. Networking This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) Networking service. Revert back to the OVS mechanism driver after failed migration to OVN Starting in RHOSP 17.1.3 you can revert a failed or interrupted migration if you first follow the proper backup steps and revert instructions. The reverted OVS environment might be altered from the original. For example, if you migrate to the OVN mechanism driver, then migrate an instance to another Compute node, and then revert the OVN migration, the instance will be on the original Compute node. Also, a revert operation interrupts connection to the data plane. HTTP/2 listener support for TLS-terminated load balancers RHOSP 17.1.2 introduces support for TLS-terminated HTTP/2 listeners. HTTP/2 listeners enable you to improve the user experience by loading web pages faster and by employing the Application-Layer Protocol Negotiation (ALPN) TLS extension when load balancers negotiate with clients. For more information about HTTP/2 listener support, see Creating a TLS-terminated load balancer with an HTTP/2 listener in Configuring load balancing as a service . Migration to OVN mechanism driver Migrations from the OVS mechanism driver to the OVN mechanism driver were not supported in RHOSP 17.0 because upgrades to RHOSP 17.0 were not supported. OVN migration is now supported in RHOSP 17.1 GA. You have the choice of migrating from ML2/OVS in 16.2 or 17.1. In most cases, Red Hat recommends upgrading to RHOSP 17.1 before migrating to ML2/OVN, because of enhanced functionality and improved migration functions in RHOSP 17.1. Stateless security groups This RHOSP release introduces support of the OpenStack stateless security groups API with the ML2/OVN mechanism driver. Stateless security groups are not supported by RHOSP deployments with the ML2/OVS mechanism driver. A stateless security group can provide performance benefits because it bypasses connection tracking in the underlying firewall, providing an option to offload conntrack-related OpenFlow rules to hardware. For more information about stateless security groups, see Configuring security groups . Security group logging To monitor traffic flows and attempts into and out of an instance, you can create packet logs for security groups. Each log generates a stream of data about events and appends it to a common log file on the Compute host from which the instance was launched. You can associate any port of an instance with one or more security groups and define one or more rules for each security group. For example, you can create a rule to allow inbound SSH traffic to any instance in a security group named finance. You can create another rule in the same security group to allow instances in that group to send and respond to ICMP (ping) messages. 
Then you can create packet logs to record combinations of packet flow events with the related security groups. Quality of Service (QoS) for egress on hardware offloaded ports Starting with RHOSP 17.1, in ML2/OVN deployments, you can enable minimum bandwidth and bandwidth limit egress policies for hardware offloaded ports. You cannot enable ingress policies for hardware offloaded ports. For more information, see Configuring the Networking service for QoS policies . Open vSwitch (OVS) Poll Mode Driver (PMD) Auto Load Balance Starting in RHOSP 17.1, OVS PMD moves from technology preview to full support. You can use Open vSwitch (OVS) Poll Mode Driver (PMD) threads to perform the following tasks for user space context switching: Continuous polling of input ports for packets. Classifying received packets. Executing actions on the packets after classification. For more information, see Configuring DPDK parameters for node provisioning . 2.6. Network Functions Virtualization This section outlines the top new features for Red Hat OpenStack Platform (RHOSP) Network Functions Virtualization (NFV). OVS and OVN TC Flower offload with Conntrack In RHOSP 17.1, connection tracking (conntrack) hardware offloading is supported for ML2/OVS and ML2/OVN with TC Flower. For the conntrack module to offload openflow flows to hardware, you must enable security groups and port security on switchdev ports. For more information, see Configuring OVS TC-flower hardware offload . 2.7. Security This section outlines the top new features for security components in Red Hat OpenStack Platform (RHOSP). FIPS 140-3 compatibility is now fully supported You can now enable FIPS 140-3 compatibility mode with RHOSP. SRBAC is now fully supported You can now enable secure role-based access control in RHOSP. 2.8. Storage This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) storage services. Upgrade Red Hat Ceph Storage 5 to 6 Upgrading your Red Hat Ceph Storage cluster from version 5 to version 6 is now supported as a step in upgrading your RHOSP to RHOSP 17.1.2. Upgrading directly from Red Hat Ceph Storage version 4 to version 6 is not supported. If you are currently using Red Hat Ceph Storage version 4, you must upgrade to Red Hat Ceph Storage version 5 before upgrading to Red Hat Ceph Storage version 6. For more information about this procedure see, Framework for upgrades (16.2 to 17.1) . Red Hat Ceph Storage 6 If you deploy greenfield RHOSP 17.1 with Red Hat Ceph Storage (RHCS), RHOSP is deployed with RHCS 6.1. RHCS 6 is also supported as an external Red Hat Ceph Storage cluster. Red Hat Ceph Storage 7 RHOSP 17.1.3 adds support for RHCS 7 as an external Red Hat Ceph Storage cluster. Availability zones for file shares In RHOSP 17.1, cloud administrators can configure availability zones for Shared File Systems service (manila) back ends. Manage/unmanage file shares In RHOSP 17.1, cloud administrators can bring shares that are created outside the Shared File Systems service (manila) under the management of the Shared File Systems service and remove shares from the Shared File Systems service without deleting them. The CephFS driver does not support this feature. You can use this manage/unmanage functionality when commissioning, decommissioning, or migrating storage systems, or to take shares offline temporarily for maintenance. 
Block Storage supports NVMe over TCP back ends In RHOSP 17.1, the Block Storage service (cinder) supports NVMe over TCP (NVMe/TCP) drivers, for Compute nodes that are running RHEL 9. Active-active configuration for the Block Storage backup service In RHOSP 17.1, the Block Storage (cinder) backup service is deployed using an active-active configuration. For more information, see Deploying your active-active Block Storage backup service . Other Block Storage backup service improvements In RHOSP 17.1, the Block Storage (cinder) backup service supports the S3 back end and the zstd data compression algorithm. For more information, see Backup repository back-end configuration and Block Storage backup service configuration . New Dell PowerFlex and PowerStore drivers The Shared File Systems service (manila) now includes back-end drivers to provision and manage NFS shares on Dell PowerFlex storage systems, and NFS and CIFS shares on Dell PowerStore storage systems. The use of these drivers is supported when the vendor publishes certification on the Ecosystem Catalog. 2.9. Upgrades and updates This section outlines the top new features for Red Hat OpenStack Platform (RHOSP) upgrades and updates. Pre-update and post-update validations In RHOSP 17.1.1, pre-update and post-update validations are now supported. With this enhancement, you can verify the requirements and functionality of your undercloud before you begin a minor update of your environment. You can then verify the overcloud functionality after you perform a minor update. For more information, see Validating RHOSP before the undercloud update and Validating RHOSP after the overcloud update in Performing a minor update of Red Hat OpenStack Platform . Multi-RHEL In RHOSP 17.1, you can upgrade a portion of your Compute nodes to RHEL 9.2 while the rest of your Compute nodes remain on RHEL 8.4. This is referred to as a Multi-RHEL environment. For more information about the benefits and limitations of a Multi-RHEL environment, see Planning for a Compute node upgrade in Framework for upgrades (16.2 to 17.1) . 2.10. Development and technology previews This section provides an overview of the top new development and technology previews in this release of Red Hat OpenStack Platform (RHOSP). Nmstate support Warning This feature is available in this release as a Development Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. In the upgrade to RHOSP 17.1.4, there is an optional migration from the deprecated ifcfg-scripts to Nmstate, the declarative network manager API. Customers interested in migrating to Red Hat OpenStack Services on OpenShift, the release of RHOSP, must adopt Nmstate because ifcfg-scripts will eventually be removed. For more information, see Introduction to Nmstate . The Nmstate migration causes down time, so you must plan the migration for a maintenance window. For more information, see Framework for upgrades (16.2 to 17.1) . Router flavors Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . The router flavors feature lets you define router flavors and use them to create custom virtual routers. For more information, see Creating custom virtual routers with router flavors .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/release_notes/chap-top-new-features_rhosp-relnotes
|
Chapter 61. JSLT
|
Chapter 61. JSLT Since Camel 3.1 Only producer is supported The JSLT component allows you to process a JSON message using a JSLT expression. This can be ideal when doing JSON to JSON transformation or querying data. 61.1. Dependencies When using jslt with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency> 61.2. URI format Where specName is the classpath-local URI of the specification to invoke, or the complete URL of the remote specification (for example, file://folder/myfile.vm ). 61.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 61.3.1. Configuring Component Options At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, URLs for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 61.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, first for the component, followed by the endpoint. 61.4. Component Options The JSLT component supports 5 options, which are listed below. Name Description Default Type allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean functions (advanced) JSLT can be extended by plugging in functions written in Java. Collection objectFilter (advanced) JSLT can be extended by plugging in a custom jslt object filter. JsonFilter 61.4.1. Endpoint Options The JSLT endpoint is configured using URI syntax: with the following path and query parameters: 61.4.1.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 61.4.1.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean mapBigDecimalAsFloats (producer) If true, the mapper will use the USE_BIG_DECIMAL_FOR_FLOATS in serialization features. false boolean objectMapper (producer) Setting a custom JSON Object Mapper to be used. ObjectMapper prettyPrint (common) If true, JSON in output message is pretty printed. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 61.5. Message Headers The JSLT component supports 2 message header(s), which is/are listed below: Name Description Default Type CamelJsltString (producer) Constant: HEADER_JSLT_STRING The JSLT Template as String. String CamelJsltResourceUri (producer) Constant: HEADER_JSLT_RESOURCE_URI The resource URI. String 61.6. Passing values to JSLT Camel can supply exchange information as variables when applying a JSLT expression on the body. The available variables from the Exchange are: name value headers The headers of the In message as a json object exchange.properties The Exchange properties as a json object. 
exchange is the name of the variable and properties is the path to the exchange properties. Available if the allowContextMapAll option is true. All the values that cannot be converted to JSON with Jackson are denied and will not be available in the JSLT expression. For example, the header named "type" and the exchange property "instance" can be accessed like { "type": $headers.type, "instance": $exchange.properties.instance } 61.7. Samples The following examples show typical usage. from("activemq:My.Queue"). to("jslt:com/acme/MyResponse.json"); And a file-based resource: from("activemq:My.Queue"). to("jslt:file://myfolder/MyResponse.json?contentCache=true"). to("activemq:Another.Queue"); You can also specify which JSLT expression the component should use dynamically via a header, so for example: from("direct:in"). setHeader("CamelJsltResourceUri").constant("path/to/my/spec.json"). to("jslt:dummy?allowTemplateFromHeader=true"); Or send the whole JSLT expression via a header (suitable for querying): from("direct:in"). setHeader("CamelJsltString").constant(".published"). to("jslt:dummy?allowTemplateFromHeader=true"); Passing exchange properties to the JSLT expression can be done like this: from("direct:in"). to("jslt:com/acme/MyResponse.json?allowContextMapAll=true"); 61.8. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.jslt.allow-template-from-header Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false Boolean camel.component.jslt.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jslt.enabled Whether to enable auto configuration of the jslt component. This is enabled by default. Boolean camel.component.jslt.functions JSLT can be extended by plugging in functions written in Java. Collection camel.component.jslt.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jslt.object-filter JSLT can be extended by plugging in a custom jslt object filter. The option is a com.schibsted.spt.data.jslt.filters.JsonFilter type. JsonFilter
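For context, a JSLT specification such as the MyResponse.json file referenced in the samples above is itself a small JSON-to-JSON transform. A minimal sketch of what such a file could contain (the input structure here, an order with id, customer, and lines, is purely hypothetical):

{
  "id": .order.id,
  "customer": .order.customer.name,
  "items": [for (.order.lines) {"sku": .sku, "quantity": .quantity}]
}

Dot expressions select fields from the incoming JSON body, and the for comprehension rebuilds the line items into a trimmed-down array.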
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency>",
"jslt:specName[?options]",
"jslt:resourceUri",
"{ \"type\": USDheaders.type, \"instance\": USDexchange.properties.instance }",
"from(\"activemq:My.Queue\"). to(\"jslt:com/acme/MyResponse.json\");",
"from(\"activemq:My.Queue\"). to(\"jslt:file://myfolder/MyResponse.json?contentCache=true\"). to(\"activemq:Another.Queue\");",
"from(\"direct:in\"). setHeader(\"CamelJsltResourceUri\").constant(\"path/to/my/spec.json\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");",
"from(\"direct:in\"). setHeader(\"CamelJsltString\").constant(\".published\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");",
"from(\"direct:in\"). to(\"jslt:com/acme/MyResponse.json?allowContextMapAll=true\");"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jslt-component-starter
|
Chapter 52. Updated Drivers
|
Chapter 52. Updated Drivers Storage Driver Updates The QLogic Fibre Channel HBA driver (qla2xxx.ko.xz) has been updated to version 9.00.00.00.07.5-k1. The Cisco FCoE HBA driver (fnic.ko.xz) has been updated to version 1.6.0.34. The Emulex OneConnect Open-iSCSI driver (be2iscsi.ko.xz) has been updated to version 11.4.0.1. The QLogic FCoE driver (bnx2fc.ko.xz) has been updated to version 2.11.8. The Microsemi Smart Family Controller driver (smartpqi.ko.xz) has been updated to version 1.1.2-126. The Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:11.4.0.4. The LSI MPT Fusion SAS 3.0 Device driver (mpt3sas.ko.xz) has been updated to version 16.100.00.00. The QLogic QEDF 25/40/50/100Gb FCoE driver (qedf.ko.xz) has been updated to version 8.20.5.0. The Avago MegaRAID SAS driver (megaraid_sas.ko.xz) has been updated to version 07.702.06.00-rh2. The HP Smart Array Controller driver (hpsa.ko.xz) has been updated to version 3.4.20-0-RH2. Network Driver Updates The Realtek RTL8152/RTL8153 Based USB Ethernet Adapters driver (r8152.ko.xz) has been updated to version v1.08.9. The Intel(R) 10 Gigabit PCI Express Network driver (ixgbe.ko.xz) has been updated to version 5.1.0-k-rh7.5. The Intel(R) Ethernet Switch Host Interface driver (fm10k.ko.xz) has been updated to version 0.21.7-k. The Intel(R) Ethernet Connection XL710 Network driver (i40e.ko.xz) has been updated to version 2.1.14-k. The Intel(R) 10 Gigabit Virtual Function Network driver (ixgbevf.ko.xz) has been updated to version 4.1.0-k-rh7.5. The Intel(R) XL710 X710 Virtual Function Network driver (i40evf.ko.xz) has been updated to version 3.0.1-k. The Elastic Network Adapter (ENA) driver (ena.ko.xz) has been updated to version 1.2.0k. The Cisco VIC Ethernet NIC driver (enic.ko.xz) has been updated to version 2.3.0.42. The Broadcom BCM573xx network driver (bnxt_en.ko.xz) has been updated to version 1.8.0. The QLogic FastLinQ 4xxxx Core Module driver (qed.ko.xz) has been updated to version 8.10.11.21. The QLogic 1/10 GbE Converged/Intelligent Ethernet driver (qlcnic.ko.xz) has been updated to version 5.3.66. The Mellanox ConnectX HCA Ethernet driver (mlx4_en.ko.xz) has been updated to version 4.0-0. The Mellanox ConnectX HCA low-level driver (mlx4_core.ko.xz) has been updated to version 4.0-0. The Mellanox Connect-IB, ConnectX-4 core driver (mlx5_core.ko.xz) has been updated to version 5.0-0. Graphics Driver and Miscellaneous Driver Updates The standalone VMware SVGA device drm driver (vmwgfx.ko.xz) has been updated to version 2.14.0.0.
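To confirm which of these driver versions is actually present on a given system, you can query the module metadata directly. A minimal sketch using two of the modules listed above (modinfo reads the installed module whether or not it is currently loaded):

modinfo -F version qla2xxx
modinfo -F version ixgbe

Comparing the reported versions with the list above is a quick way to verify that a host is running the updated drivers after applying the release.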
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/updated_drivers
|
Chapter 34. Associating secondary interfaces metrics to network attachments
|
Chapter 34. Associating secondary interfaces metrics to network attachments 34.1. Extending secondary network metrics for monitoring Secondary devices, or interfaces, are used for different purposes. It is important to have a way to classify them to be able to aggregate the metrics for secondary devices with the same classification. Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, if secondary interfaces are added, it can be difficult to use the metrics since it is hard to identify interfaces using only interface names. When adding secondary interfaces, their names depend on the order in which they are added, and different secondary interfaces might belong to different networks and can be used for different purposes. With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types. The network type is generated using the name of the related NetworkAttachmentDefinition , that in turn is used to differentiate different classes of secondary networks. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names. 34.1.1. Network Metrics Daemon The Network Metrics Daemon is a daemon component that collects and publishes network related metrics. The kubelet is already publishing network related metrics you can observe. These metrics are: container_network_receive_bytes_total container_network_receive_errors_total container_network_receive_packets_total container_network_receive_packets_dropped_total container_network_transmit_bytes_total container_network_transmit_errors_total container_network_transmit_packets_total container_network_transmit_packets_dropped_total The labels in these metrics contain, among others: Pod name Pod namespace Interface name (such as eth0 ) These metrics work well until new interfaces are added to the pod, for example via Multus , as it is not clear what the interface names refer to. The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to. This is addressed by introducing the new pod_network_name_info described in the following section. 34.1.2. Metrics with network name This daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0 : pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0 The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition. The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks. 
Using a PromQL query like the following ones, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/network-status annotation: (container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
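Building on the same join pattern, traffic can also be aggregated per network attachment rather than per interface. A minimal sketch (it assumes the Network Metrics Daemon is running so that pod_network_name_info is populated; because the gauge is fixed at 0, adding it only attaches the network_name label without changing the value):

sum by (namespace, pod, network_name) (
  rate(container_network_receive_bytes_total[5m])
  + on(namespace, pod, interface) group_left(network_name) (pod_network_name_info)
)

The result is receive throughput per pod broken down by network attachment definition, which is a convenient base for alerts that target a specific class of secondary network.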
|
[
"pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0",
"(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/associating-secondary-interfaces-metrics-to-network-attachments
|
Chapter 1. Introduction to VDO on LVM
|
Chapter 1. Introduction to VDO on LVM The Virtual Data Optimizer (VDO) feature provides inline block-level deduplication, compression, and thin provisioning for storage. You can manage VDO as a type of Logical Volume Manager (LVM) Logical Volume (LV), similar to LVM thin-provisioned volumes. VDO volumes on LVM (LVM-VDO) contain the following components: VDO pool LV This is the backing physical device that stores, deduplicates, and compresses data for the VDO LV. The VDO pool LV sets the physical size of the VDO volume, which is the amount of data that VDO can store on the disk. Currently, each VDO pool LV can hold only one VDO LV. As a result, VDO deduplicates and compresses each VDO LV separately. Duplicate data that is stored on separate LVs does not benefit from data optimization of the same VDO volume. VDO LV This is the virtual, provisioned device on top of the VDO pool LV. The VDO LV sets the provisioned, logical size of the VDO volume, which is the amount of data that applications can write to the volume before deduplication and compression occurs. If you are already familiar with the structure of an LVM thin-provisioned implementation, see the following table to understand how the different aspects of VDO are presented to the system. Table 1.1. A comparison of components in VDO on LVM and LVM thin provisioning Physical device Provisioned device VDO on LVM VDO pool LV VDO LV LVM thin provisioning Thin pool Thin volume Since VDO is thin-provisioned, the file system and applications only see the logical space in use and not the actual available physical space. Use scripting to monitor the available physical space and generate an alert if usage exceeds a threshold; a minimal example is provided at the end of this chapter. For information about monitoring the available VDO space, see the Monitoring VDO section. Additional resources Deduplicating and compressing storage Creating a thin logical volume
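A minimal monitoring sketch using standard LVM reporting (it assumes a volume group named myvg and a VDO pool LV named vdopool0; substitute your own names, and confirm that the data_percent field is reported for the pool on your LVM version):

#!/bin/bash
# Alert when the VDO pool's physical usage crosses a threshold.
THRESHOLD=80
USED=$(lvs --noheadings -o data_percent myvg/vdopool0 | tr -d ' ' | cut -d. -f1)
if [ "${USED:-0}" -ge "$THRESHOLD" ]; then
    echo "WARNING: VDO pool myvg/vdopool0 is ${USED}% full" >&2
fi

Run it from cron or a systemd timer so that you are warned before the physical space behind the thin-provisioned VDO LV runs out.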
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_logical_volumes_on_rhel/introduction-to-vdo-on-lvm_deduplicating-and-compressing-logical-volumes-on-rhel
|
Chapter 1. Customizing sample software templates
|
Chapter 1. Customizing sample software templates Learn how to customize ready-to-use software templates for your on-prem environment. Cluster administrators have full control over this process, including modifying metadata and specifications. Prerequisites You have used the forked repository URL from tssc-sample-templates during the RHTAP install process. Procedure Clone your forked repository, and then open it in your preferred text editor, such as Visual Studio Code. Locate the properties file within your project directory. This file stores the default values that you can customize. Open it for editing and update the following key-value pairs according to your environment (an example of an edited file is sketched at the end of this chapter). Key Description export GITHUB_DEFAULT_HOST Set this to your on-prem GitHub host's fully qualified domain name. That is, the URL without the HTTP protocol and without the .git extension. For example github-github.apps.cluster-ljg9z.sandbox219.opentlc.com. Default is github.com . export GITLAB_DEFAULT_HOST Set this to your on-prem GitLab host's fully qualified domain name. That is, the URL without the HTTP protocol and without the .git extension. For example gitlab-gitlab.apps.cluster-ljg9z.sandbox219.opentlc.com. Default is gitlab.com . export QUAY_DEFAULT_HOST Set this to your specific on-prem image registry URL without the HTTP protocol. For example, quay-tv2pb.apps.cluster-tv2pb.sandbox1194.opentlc.com. The default Quay host is quay.io . export DEFAULT_DEPLOYMENT_NAMESPACE_PREFIX The namespace prefix for deployments within RHTAP. Default is rhtap-app . Note Update this if you have modified the default trusted-application-pipeline: namespace during the RHTAP installation process. Figure 1.1. The properties file Run the generate.sh script in your terminal. This action adjusts the software templates, replacing default host values with your specified inputs. ./generate.sh Figure 1.2. The generate.sh script Commit and push the changes to your repository. This automatically updates the template in RHDH. Alternatively, you can import and refresh a single customized template, or all of them, directly in RHDH. Go to your forked sample template repository on your Git provider. For a single template, from the templates directory, select template.yaml . Copy its URL from the browser address bar. For example, https://github.com/<username>/tssc-sample-templates/blob/main/templates/devfile-sample-code-with-quarkus-dance/template.yaml . Otherwise, for all the templates, select all.yaml and copy its URL from the browser address bar. For example, https://github.com/<username>/tssc-sample-templates/blob/main/all.yaml . Switch back to the RHDH platform. Select Create > Register Existing Component . In the Select URL field, paste the appropriate URL that you copied in Step 4b. Select Analyze and then select Import to update the templates in RHDH. Verification Consider creating an application to explore the impact of your template customization. Additional resources To customize pipelines, see Customizing sample pipeline templates
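A sketch of what the edited properties file could look like (the host values below are placeholders only; substitute the fully qualified domain names of your own GitHub, GitLab, and Quay instances):

export GITHUB_DEFAULT_HOST=github-github.apps.cluster-example.example.com
export GITLAB_DEFAULT_HOST=gitlab-gitlab.apps.cluster-example.example.com
export QUAY_DEFAULT_HOST=quay-registry.apps.cluster-example.example.com
export DEFAULT_DEPLOYMENT_NAMESPACE_PREFIX=rhtap-app

Running ./generate.sh after saving these values rewrites the sample templates so that they point at your hosts instead of the public github.com, gitlab.com, and quay.io defaults.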
|
[
"./generate.sh"
] |
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/customizing_red_hat_trusted_application_pipeline/customizing-sample-software-templates_default
|
23.2. Operating System Booting
|
23.2. Operating System Booting There are a number of different ways to boot virtual machines, including BIOS boot loader, host physical machine boot loader, direct kernel boot, and container boot. 23.2.1. BIOS Boot Loader Booting through the BIOS is available for hypervisors that support full virtualization. In this case, the BIOS has a boot order priority (floppy, hard disk, CD-ROM, network) determining where to locate the boot image. The <os> section of the domain XML contains the following information: ... <os> <type>hvm</type> <boot dev='fd'/> <boot dev='hd'/> <boot dev='cdrom'/> <boot dev='network'/> <bootmenu enable='yes'/> <smbios mode='sysinfo'/> <bios useserial='yes' rebootTimeout='0'/> </os> ... Figure 23.2. BIOS boot loader domain XML Important Instead of using the <boot dev/> configuration for determining boot device order, Red Hat recommends using the <boot order/> configuration. For an example, see Specifying boot order . The components of this section of the domain XML are as follows: Table 23.2. BIOS boot loader elements Element Description <type> Specifies the type of operating system to be booted on the guest virtual machine. hvm indicates that the operating system is designed to run on bare metal and requires full virtualization. linux refers to an operating system that supports the KVM hypervisor guest ABI. There are also two optional attributes: arch specifies the CPU architecture to virtualize, and machine refers to the machine type. For more information, see the libvirt upstream documentation . <boot> Specifies the boot device to consider with one of the following values: fd , hd , cdrom or network . The boot element can be repeated multiple times to set up a priority list of boot devices to try in turn. Multiple devices of the same type are sorted according to their targets while preserving the order of buses. After defining the domain, its XML configuration returned by libvirt lists devices in the sorted order. Once sorted, the first device is marked as bootable. For more information, see the libvirt upstream documentation . <bootmenu> Determines whether or not to enable an interactive boot menu prompt on guest virtual machine start up. The enable attribute can be either yes or no . If not specified, the hypervisor default is used. <smbios> Determines how SMBIOS information is made visible in the guest virtual machine. The mode attribute must be specified, as either emulate (lets the hypervisor generate all values), host (copies all of Block 0 and Block 1, except for the UUID, from the host physical machine's SMBIOS values; the virConnectGetSysinfo call can be used to see what values are copied), or sysinfo (uses the values in the sysinfo element). If not specified, the hypervisor's default setting is used. <bios> This element has the useserial attribute with possible values yes or no . The attribute enables or disables the Serial Graphics Adapter, which enables users to see BIOS messages on a serial port. Therefore, a serial port must be defined. The rebootTimeout attribute controls whether and after how long the guest virtual machine should start booting again in case the boot fails (according to the BIOS). The value is set in milliseconds with a maximum of 65535 ; setting -1 disables the reboot. 23.2.2.
Direct Kernel Boot When installing a new guest virtual machine operating system, it is often useful to boot directly from a kernel and initrd stored in the host physical machine operating system, allowing command-line arguments to be passed directly to the installer. This capability is usually available for both fully virtualized and paravirtualized guest virtual machines. ... <os> <type>hvm</type> <kernel>/root/f8-i386-vmlinuz</kernel> <initrd>/root/f8-i386-initrd</initrd> <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline> <dtb>/root/ppc.dtb</dtb> </os> ... Figure 23.3. Direct kernel boot The components of this section of the domain XML are as follows: Table 23.3. Direct kernel boot elements Element Description <type> Same as described in the BIOS boot section. <kernel> Specifies the fully-qualified path to the kernel image in the host physical machine operating system. <initrd> Specifies the fully-qualified path to the (optional) ramdisk image in the host physical machine operating system. <cmdline> Specifies arguments to be passed to the kernel (or installer) at boot time. This is often used to specify an alternate primary console (such as a serial port), or the installation media source or kickstart file. 23.2.3. Container Boot When booting a domain using container-based virtualization, instead of a kernel or boot image, a path to the init binary is required, using the init element. By default, this will be launched with no arguments. To specify the initial argv , use the initarg element, repeated as many times as required. The cmdline element provides an equivalent to /proc/cmdline but will not affect <initarg> . ... <os> <type arch='x86_64'>exe</type> <init>/bin/systemd</init> <initarg>--unit</initarg> <initarg>emergency.service</initarg> </os> ... Figure 23.4. Container boot
|
[
"<os> <type>hvm</type> <boot dev='fd'/> <boot dev='hd'/> <boot dev='cdrom'/> <boot dev='network'/> <bootmenu enable='yes'/> <smbios mode='sysinfo'/> <bios useserial='yes' rebootTimeout='0'/> </os>",
"<os> <type>hvm</type> <kernel>/root/f8-i386-vmlinuz</kernel> <initrd>/root/f8-i386-initrd</initrd> <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline> <dtb>/root/ppc.dtb</dtb> </os>",
"<os> <type arch='x86_64'>exe</type> <init>/bin/systemd</init> <initarg>--unit</initarg> <initarg>emergency.service</initarg> </os>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Operating_system_booting
|
Chapter 319. Spring Batch Component
|
Chapter 319. Spring Batch Component Available as of Camel version 2.10 The spring-batch: component and support classes provide an integration bridge between Camel and the Spring Batch infrastructure. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-batch</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 319.1. URI format spring-batch:jobName[?options] Where jobName represents the name of the Spring Batch job located in the Camel registry. Alternatively, if a JobRegistry is provided, it is used to locate the job instead. WARNING: This component can only be used to define producer endpoints, which means that you cannot use the Spring Batch component in a from() statement. 319.2. Options The Spring Batch component supports 3 options, which are listed below. Name Description Default Type jobLauncher (producer) Explicitly specifies a JobLauncher to be used. JobLauncher jobRegistry (producer) Explicitly specifies a JobRegistry to be used. JobRegistry resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Spring Batch endpoint is configured using the URI syntax spring-batch:jobName with the following path and query parameters: 319.2.1. Path Parameters (1 parameter): Name Description Default Type jobName Required The name of the Spring Batch job located in the registry. String 319.2.2. Query Parameters (4 parameters): Name Description Default Type jobFromHeader (producer) Explicitly defines if the jobName should be taken from the headers instead of the URI. false boolean jobLauncher (producer) Explicitly specifies a JobLauncher to be used. JobLauncher jobRegistry (producer) Explicitly specifies a JobRegistry to be used. JobRegistry synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 319.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.spring-batch.enabled Enable spring-batch component true Boolean camel.component.spring-batch.job-launcher Explicitly specifies a JobLauncher to be used. The option is an org.springframework.batch.core.launch.JobLauncher type. String camel.component.spring-batch.job-registry Explicitly specifies a JobRegistry to be used. The option is an org.springframework.batch.core.configuration.JobRegistry type. String camel.component.spring-batch.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 319.4. Usage When the Spring Batch component receives a message, it triggers the job execution. The job will be executed using the org.springframework.batch.core.launch.JobLauncher instance resolved according to the following algorithm: If the JobLauncher is manually set on the component, then use it. If the jobLauncherRef option is set on the component, then search the Camel Registry for the JobLauncher with the given name (deprecated and will be removed in Camel 3.0). If there is a JobLauncher registered in the Camel Registry under the jobLauncher name, then use it.
If none of the steps above resolves the JobLauncher and there is exactly one JobLauncher instance in the Camel Registry, then use it. All headers found in the message are passed to the JobLauncher as job parameters. String , Long , Double and java.util.Date values are copied to the org.springframework.batch.core.JobParametersBuilder - other data types are converted to Strings. 319.5. Examples Triggering the Spring Batch job execution: from("direct:startBatch").to("spring-batch:myJob"); Triggering the Spring Batch job execution with the JobLauncher set explicitly. from("direct:startBatch").to("spring-batch:myJob?jobLauncherRef=myJobLauncher"); Starting from Camel 2.11.1, the JobExecution instance returned by the JobLauncher is forwarded by the SpringBatchProducer as the output message. You can use the JobExecution instance to perform some operations using the Spring Batch API directly. from("direct:startBatch").to("spring-batch:myJob").to("mock:JobExecutions"); ... MockEndpoint mockEndpoint = ...; JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class); BatchStatus currentJobStatus = jobExecution.getStatus(); 319.6. Support classes Apart from the component, Camel Spring Batch also provides support classes, which can be used to hook into the Spring Batch infrastructure. 319.6.1. CamelItemReader CamelItemReader can be used to read batch data directly from the Camel infrastructure. For example, the snippet below configures Spring Batch to read data from a JMS queue. <bean id="camelReader" class="org.apache.camel.component.spring.batch.support.CamelItemReader"> <constructor-arg ref="consumerTemplate"/> <constructor-arg value="jms:dataQueue"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="camelReader" writer="someWriter" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 319.6.2. CamelItemWriter CamelItemWriter has a similar purpose to CamelItemReader , but it is dedicated to writing chunks of the processed data. For example, the snippet below configures Spring Batch to write data to a JMS queue. <bean id="camelwriter" class="org.apache.camel.component.spring.batch.support.CamelItemWriter"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="jms:dataQueue"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="camelwriter" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 319.6.3. CamelItemProcessor CamelItemProcessor is the implementation of the Spring Batch org.springframework.batch.item.ItemProcessor interface. The implementation relies on the Request Reply pattern to delegate the processing of the batch item to the Camel infrastructure. The item to process is sent to the Camel endpoint as the body of the message. For example, the snippet below performs simple processing of the batch item using the Direct endpoint and the Simple expression language .
<camel:camelContext> <camel:route> <camel:from uri="direct:processor"/> <camel:setExchangePattern pattern="InOut"/> <camel:setBody> <camel:simple>Processed ${body}</camel:simple> </camel:setBody> </camel:route> </camel:camelContext> <bean id="camelProcessor" class="org.apache.camel.component.spring.batch.support.CamelItemProcessor"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="direct:processor"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="someWriter" processor="camelProcessor" commit-interval="100"/> </batch:tasklet> </batch:step> </batch:job> 319.6.4. CamelJobExecutionListener CamelJobExecutionListener is the implementation of the org.springframework.batch.core.JobExecutionListener interface sending job execution events to the Camel endpoint. The org.springframework.batch.core.JobExecution instance produced by Spring Batch is sent as the body of the message. To distinguish between before- and after-callbacks, the SPRING_BATCH_JOB_EVENT_TYPE header is set to the BEFORE or AFTER value. The example snippet below sends Spring Batch job execution events to a JMS queue. <bean id="camelJobExecutionListener" class="org.apache.camel.component.spring.batch.support.CamelJobExecutionListener"> <constructor-arg ref="producerTemplate"/> <constructor-arg value="jms:batchEventsBus"/> </bean> <batch:job id="myJob"> <batch:step id="step"> <batch:tasklet> <batch:chunk reader="someReader" writer="someWriter" commit-interval="100"/> </batch:tasklet> </batch:step> <batch:listeners> <batch:listener ref="camelJobExecutionListener"/> </batch:listeners> </batch:job> 319.7. Spring Cloud Available as of Camel 2.19 Spring Cloud component Maven users will need to add the following dependency to their pom.xml in order to use this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> The camel-spring-cloud JAR comes with the spring.factories file, so as soon as you add that dependency into your classpath, Spring Boot will automatically auto-configure Camel for you. 319.7.1. Camel Spring Cloud Starter Available as of Camel 2.19 To use the starter, add the following to your Spring Boot pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud-starter</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> 319.8. Spring Cloud Consul Available as of Camel 2.22 319.9. Spring Cloud Zookeeper Available as of Camel 2.22 319.10. Spring Cloud Netflix Available as of Camel 2.19 The Spring Cloud Netflix component bridges Camel Cloud and Spring Cloud Netflix so you can leverage Spring Cloud Netflix service discovery and load balancing features in Camel, and you can use Camel Service Discovery implementations as a ServerList source for Spring Cloud Netflix's Ribbon load balancer. Maven users will need to add the following dependency to their pom.xml in order to use this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud-netflix</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> The camel-spring-cloud-netflix JAR comes with the spring.factories file, so as soon as you add that dependency into your classpath, Spring Boot will automatically auto-configure Camel for you.
You can disable Camel Spring Cloud Netflix with the following properties: # Enable/Disable the whole integration, default true camel.cloud.netflix = true # Enable/Disable the integration with Ribbon, default true camel.cloud.netflix.ribbon = true 319.11. Spring Cloud Netflix Starter Available as of Camel 2.19 To use the starter, add the following to your Spring Boot pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud-netflix-starter</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>
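As a minimal sketch of the opt-out described above, the following appends the two properties to a Spring Boot application.properties file, keeping the overall integration on while turning the Ribbon integration off; the file path assumes a standard Maven project layout.
cat >> src/main/resources/application.properties <<'EOF'
camel.cloud.netflix = true
camel.cloud.netflix.ribbon = false
EOF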
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-batch</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"spring-batch:jobName[?options]",
"spring-batch:jobName",
"from(\"direct:startBatch\").to(\"spring-batch:myJob\");",
"from(\"direct:startBatch\").to(\"spring-batch:myJob?jobLauncherRef=myJobLauncher\");",
"from(\"direct:startBatch\").to(\"spring-batch:myJob\").to(\"mock:JobExecutions\"); MockEndpoint mockEndpoint = ...; JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class); BatchStatus currentJobStatus = jobExecution.getStatus();",
"<bean id=\"camelReader\" class=\"org.apache.camel.component.spring.batch.support.CamelItemReader\"> <constructor-arg ref=\"consumerTemplate\"/> <constructor-arg value=\"jms:dataQueue\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"camelReader\" writer=\"someWriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>",
"<bean id=\"camelwriter\" class=\"org.apache.camel.component.spring.batch.support.CamelItemWriter\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"jms:dataQueue\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"camelwriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>",
"<camel:camelContext> <camel:route> <camel:from uri=\"direct:processor\"/> <camel:setExchangePattern pattern=\"InOut\"/> <camel:setBody> <camel:simple>Processed USD{body}</camel:simple> </camel:setBody> </camel:route> </camel:camelContext> <bean id=\"camelProcessor\" class=\"org.apache.camel.component.spring.batch.support.CamelItemProcessor\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"direct:processor\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"someWriter\" processor=\"camelProcessor\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> </batch:job>",
"<bean id=\"camelJobExecutionListener\" class=\"org.apache.camel.component.spring.batch.support.CamelJobExecutionListener\"> <constructor-arg ref=\"producerTemplate\"/> <constructor-arg value=\"jms:batchEventsBus\"/> </bean> <batch:job id=\"myJob\"> <batch:step id=\"step\"> <batch:tasklet> <batch:chunk reader=\"someReader\" writer=\"someWriter\" commit-interval=\"100\"/> </batch:tasklet> </batch:step> <batch:listeners> <batch:listener ref=\"camelJobExecutionListener\"/> </batch:listeners> </batch:job>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud-starter</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud-netflix</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>",
"Enable/Disable the whole integration, default true camel.cloud.netflix = true Enable/Disable the integration with Ribbon, default true camel.cloud.netflix.ribbon = true",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring-cloud-netflix-starter</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/spring-batch-component
|
Chapter 3. Performing Additional Configuration on Capsule Server
|
Chapter 3. Performing Additional Configuration on Capsule Server Use this chapter to configure additional settings on your Capsule Server. 3.1. Configuring Capsule for Host Registration and Provisioning Use this procedure to configure Capsule so that you can register and provision hosts using your Capsule Server instead of your Satellite Server. Procedure On Satellite Server, add the Capsule to the list of trusted proxies. This is required for Satellite to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by Capsule. For security reasons, Satellite recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of Capsules, or network ranges. Warning Do not use a network range that is too wide, because that poses a potential security risk. Enter the following command. Note that the command overwrites the list that is currently stored in Satellite. Therefore, if you have set any trusted proxies previously, you must include them in the command as well: The localhost entries are required, do not omit them. Verification List the current trusted proxies using the full help of Satellite installer: The current listing contains all trusted proxies you require. 3.2. Enabling Katello Agent on External Capsules Remote Execution is the primary method of managing packages on Content Hosts. To be able to use the deprecated Katello Agent it must be enabled on each Capsule. Procedure To enable Katello Agent infrastructure, enter the following command: 3.3. Enabling OpenSCAP on Capsule Servers On Satellite Server and the integrated Capsule of your Satellite Server, OpenSCAP is enabled by default. To use the OpenSCAP plug-in and content on external Capsules, you must enable OpenSCAP on each Capsule. Procedure To enable OpenSCAP, enter the following command: If you want to use Puppet to deploy compliance policies, you must enable it first. For more information, see Managing Configurations Using Puppet Integration in Red Hat Satellite . 3.4. Adding Life Cycle Environments to Capsule Servers If your Capsule Server has the content functionality enabled, you must add an environment so that Capsule can synchronize content from Satellite Server and provide content to host systems. Do not assign the Library lifecycle environment to your Capsule Server because it triggers an automated Capsule sync every time the CDN updates a repository. This might consume multiple system resources on Capsules, network bandwidth between Satellite and Capsules, and available disk space on Capsules. You can use Hammer CLI on Satellite Server or the Satellite web UI. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule that you want to add a life cycle to. Click Edit and click the Life Cycle Environments tab. From the left menu, select the life cycle environments that you want to add to Capsule and click Submit . To synchronize the content on the Capsule, click the Overview tab and click Synchronize . Select either Optimized Sync or Complete Sync . For definitions of each synchronization type, see Recovering a Repository . CLI procedure To display a list of all Capsule Servers, on Satellite Server, enter the following command: Note the Capsule ID of the Capsule that you want to add a life cycle to. 
Using the ID, verify the details of your Capsule: To view the life cycle environments available for your Capsule Server, enter the following command and note the ID and the organization name: Add the life cycle environment to your Capsule Server: Repeat for each life cycle environment you want to add to Capsule Server. Synchronize the content from Satellite to Capsule. To synchronize all content from your Satellite Server environment to Capsule Server, enter the following command: To synchronize a specific life cycle environment from your Satellite Server to Capsule Server, enter the following command: 3.5. Enabling Power Management on Managed Hosts To perform power management tasks on managed hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on Capsule Server. Prerequisites All managed hosts must have a network interface of BMC type. Capsule Server uses this NIC to pass the appropriate credentials to the host. For more information, see Adding a Baseboard Management Controller (BMC) Interface in the Managing Hosts guide. Procedure To enable BMC, enter the following command: 3.6. Configuring DNS, DHCP, and TFTP on Capsule Server To configure the DNS, DHCP, and TFTP services on Capsule Server, use the satellite-installer command with the options appropriate for your environment. To view a complete list of configurable options, enter the satellite-installer --scenario satellite --help command. Any changes to the settings require entering the satellite-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values. To use external DNS, DHCP, and TFTP services instead, see Chapter 4, Configuring Capsule Server with External Services . Adding Multihomed DHCP details If you want to use Multihomed DHCP, you must inform the installer. Prerequisites You must have the correct network name ( dns-interface ) for the DNS server. You must have the correct interface name ( dhcp-interface ) for the DHCP server. Contact your network administrator to ensure that you have the correct settings. Procedure Enter the satellite-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services: For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in the Provisioning guide.
|
[
"satellite-installer --foreman-trusted-proxies \"127.0.0.1/8\" --foreman-trusted-proxies \"::1\" --foreman-trusted-proxies \" My_IP_address \" --foreman-trusted-proxies \" My_IP_range \"",
"satellite-installer --full-help | grep -A 2 \"trusted-proxies\"",
"satellite-installer --scenario capsule --foreman-proxy-content-enable-katello-agent=true",
"satellite-installer --scenario capsule --enable-foreman-proxy-plugin-openscap --foreman-proxy-plugin-openscap-puppet-module true",
"hammer capsule list",
"hammer capsule info --id capsule_id",
"hammer capsule content available-lifecycle-environments --id capsule_id",
"hammer capsule content add-lifecycle-environment --id capsule_id --organization \" My_Organization \" --lifecycle-environment-id lifecycle-environment_id",
"hammer capsule content synchronize --id capsule_id",
"hammer capsule content synchronize --id external_capsule_id --lifecycle-environment-id lifecycle-environment_id",
"satellite-installer --scenario capsule --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --scenario capsule --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-interface eth0 --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-interface eth0 --foreman-proxy-dhcp-additional-interfaces eth1 --foreman-proxy-dhcp-additional-interfaces eth2 --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_capsule_server/performing-additional-configuration-on-capsule-server
|
Chapter 8. Using the Streams API for code execution
|
Chapter 8. Using the Streams API for code execution Efficiently process data stored in Data Grid caches using the Streams API.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/streams
|
Access control and user management
|
Access control and user management Red Hat OpenShift GitOps 1.15 Configuring user authentication and access controls for users and namespaces Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/access_control_and_user_management/index
|
6.3. Rebooting or Resetting a Virtual Machine
|
6.3. Rebooting or Resetting a Virtual Machine You can restart your virtual machines in two different ways: using either reboot or reset. Several situations can occur where you need to reboot the virtual machine, such as after an update or configuration change. When you reboot, the virtual machine's console remains open while the guest operating system is restarted. If a guest operating system cannot be loaded or has become unresponsive, you need to reset the virtual machine. When you reset, the virtual machine's console remains open while the guest operating system is restarted. Note The reset operation can only be performed from the Administration Portal. Rebooting a Virtual Machine To reboot a virtual machine: Click Compute Virtual Machines and select a running virtual machine. Click Reboot or right-click the virtual machine and select Reboot from the pop-up menu. Click OK in the Reboot Virtual Machine(s) confirmation window. Resetting a Virtual Machine To reset a virtual machine: Click Compute Virtual Machines and select a running virtual machine. Click the down arrow next to Reboot , then click Reset , or right-click the virtual machine and select Reset from the pop-up menu. Click OK in the Reset Virtual Machine(s) confirmation window. During reboot and reset operations, the Status of the virtual machine changes to Reboot In Progress before returning to Up .
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Rebooting_a_Virtual_Machine
|
Chapter 12. Configuring RBAC policies
|
Chapter 12. Configuring RBAC policies 12.1. Overview of RBAC policies Role-based access control (RBAC) policies in OpenStack Networking allow granular control over shared neutron networks. OpenStack Networking uses a RBAC table to control sharing of neutron networks among projects, allowing an administrator to control which projects are granted permission to attach instances to a network. As a result, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project. 12.2. Creating RBAC policies This example procedure demonstrates how to use a role-based access control (RBAC) policy to grant a project access to a shared network. View the list of available networks: View the list of projects: Create a RBAC entry for the web-servers network that grants access to the auditors project ( 4b0b98f8c6c040f38ba4f7146e8680f5 ): As a result, users in the auditors project can connect instances to the web-servers network. 12.3. Reviewing RBAC policies Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac-show command to view the details of a specific RBAC entry: 12.4. Deleting RBAC policies Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac delete command to delete the RBAC, using the ID of the RBAC that you want to delete: 12.5. Granting RBAC policy access for external networks You can grant role-based access control (RBAC) policy access to external networks (networks with gateway interfaces attached) using the --action access_as_external parameter. Complete the steps in the following example procedure to create a RBAC for the web-servers network and grant access to the engineering project (c717f263785d4679b16a122516247deb): Create a new RBAC policy using the --action access_as_external option: As a result, users in the engineering project are able to view the network or connect instances to it:
|
[
"openstack network list +--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+",
"openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+",
"openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers Created a new rbac_policy: +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+",
"openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+",
"openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709 +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+",
"openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+",
"openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709 Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709",
"openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers Created a new rbac_policy: +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+",
"openstack network list +--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/config-rbac-policies_rhosp-network
|
Chapter 14. CertAndKeySecretSource schema reference
|
Chapter 14. CertAndKeySecretSource schema reference Used in: GenericKafkaListenerConfiguration , KafkaClientAuthenticationTls Property Property type Description certificate string The name of the file certificate in the Secret. key string The name of the private key in the Secret. secretName string The name of the Secret containing the certificate.
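To make the three properties concrete, the following sketch creates a Secret that a CertAndKeySecretSource could reference; the Secret name and file names are placeholders, and in the listener or client authentication configuration you would then set secretName to my-listener-cert, certificate to tls.crt, and key to tls.key.
oc create secret generic my-listener-cert --from-file=tls.crt=./listener-certificate.crt --from-file=tls.key=./listener-key.key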
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-CertAndKeySecretSource-reference
|
Chapter 1. Introducing V2V
|
Chapter 1. Introducing V2V Warning The Red Hat Enterprise Linux 6 version of the virt-v2v utility has been deprecated. Users of Red Hat Enterprise Linux 6 are advised to create a Red Hat Enterprise Linux 7 virtual machine, and install virt-v2v in that virtual machine. The Red Hat Enterprise Linux 7 version is fully supported and documented in virt-v2v Knowledgebase articles . V2V is an acronym for virtual to virtual, referring to the process of importing virtual machines from one virtualization platform to another. Red Hat Enterprise Virtualization and Red Hat Enterprise Linux are capable of performing V2V operations using the virt-v2v command. 1.1. What is virt-v2v? The virt-v2v command converts virtual machines from a foreign hypervisor to run on KVM, managed by Red Hat Enterprise Virtualization or libvirt. virt-v2v can currently convert virtual machines running Red Hat Enterprise Linux and Windows on Xen, KVM, and VMware ESX / ESX(i) hypervisors. virt-v2v enables paravirtualized ( virtio ) drivers in the converted virtual machine if possible. The following guest operating systems are supported by virt-v2v : Supported guest operating systems: Red Hat Enterprise Linux 3.9 Red Hat Enterprise Linux 4 Red Hat Enterprise Linux 5 Red Hat Enterprise Linux 6 Windows XP Windows Vista Windows 7 Windows Server 2003 Windows Server 2008 All minor releases of the above guest operating systems are supported by virt-v2v . The following source hypervisors are supported by virt-v2v : Supported source hypervisors: Unless otherwise specified, all minor releases of the following source hypervisors are supported by virt-v2v : Xen - all versions released by Red Hat KVM - all versions released by Red Hat VMware ESX / ESX(i) - versions 3.5, 4.0, 4.1, 5.0, 5.1
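As a rough illustration of what such a conversion looks like on the command line, the following is a sketch only: the ESX server, export storage domain, network name, and guest name are placeholders, and the rest of this guide describes the exact options required for your environment.
# Convert a guest from VMware ESX for use with Red Hat Enterprise Virtualization
virt-v2v -ic esx://esx.example.com/?no_verify=1 -o rhev -os storage.example.com:/exportdomain --network rhevm guest_name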
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/chap-introducing_v2v
|
Chapter 19. Migration of a DRL service to a Red Hat build of Kogito microservice
|
Chapter 19. Migration of a DRL service to a Red Hat build of Kogito microservice You can build and deploy a sample project in Red Hat build of Kogito to expose a stateless rules evaluation of the decision engine in a Red Hat build of Quarkus REST endpoint, and migrate the REST endpoint to Red Hat build of Kogito. The stateless rule evaluation is a single execution of a rule set in Red Hat Process Automation Manager and can be identified as a function invocation. In the invoked function, the output values are determined using the input values. Also, the invoked function uses the decision engine to perform the jobs. Therefore, in such cases, a function is exposed using a REST endpoint and converted into a microservice. After converting into a microservice, a function is deployed into a Function as a Service environment to eliminate the cost of JVM startup time. 19.1. Major changes and migration considerations The following table describes the major changes and features that affect migration from the KIE Server API and KJAR to Red Hat build of Kogito deployments: Table 19.1. DRL migration considerations Feature In KIE Server API In Red Hat build of Kogito with legacy API support In Red Hat build of Kogito artifact DRL files stored in src/main/resources folder of KJAR. copy as is to src/main/resources folder. rewrite using the rule units and OOPath. KieContainer configured using a system property or kmodule.xml file. replaced by KieRuntimeBuilder . not required. KieBase or KieSession configured using a system property or kmodule.xml file. configured using a system property or kmodule.xml file. replaced by rule units. 19.2. Migration strategy In Red Hat Process Automation Manager, you can migrate a rule evaluation to a Red Hat build of Kogito deployment in the following two ways: Using legacy API in Red Hat build of Kogito In Red Hat build of Kogito, the kogito-legacy-api module makes the legacy API of Red Hat Process Automation Manager available; therefore, the DRL files remain unchanged. This approach of migrating rule evaluation requires minimal changes and enables you to use major Red Hat build of Quarkus features, such as hot reload and native image creation. Migrating to Red Hat build of Kogito rule units Migrating to Red Hat build of Kogito rule units include the programming model of Red Hat build of Kogito, which is based on the concept of rule units. A rule unit in Red Hat build of Kogito includes both a set of rules and the facts, against which the rules are matched. Rule units in Red Hat build of Kogito also come with data sources. A rule unit data source is a source of the data processed by a given rule unit and represents the entry point, which is used to evaluate the rule unit. Rule units use two types of data sources: DataStream : This is an append-only data source and the facts added into the DataStream cannot be updated or removed. DataStore : This data source is for modifiable data. You can update or remove an object using the FactHandle that is returned when the object is added into the DataStore . Overall, a rule unit contains two parts: The definition of the fact to be evaluated and the set of rules evaluating the facts. 19.3. Example loan application project In the following sections, a loan application project is used as an example to migrate a DRL project to Red Hat build of Kogito deployments. 
The domain model of the loan application project is made of two classes, the LoanApplication class and the Applicant class: Example LoanApplication class public class LoanApplication { private String id; private Applicant applicant; private int amount; private int deposit; private boolean approved = false; public LoanApplication(String id, Applicant applicant, int amount, int deposit) { this.id = id; this.applicant = applicant; this.amount = amount; this.deposit = deposit; } } Example Applicant class public class Applicant { private String name; private int age; public Applicant(String name, int age) { this.name = name; this.age = age; } } The rule set is created using business decisions to approve or reject an application, along with the last rule of collecting all the approved applications in a list. Example rule set in loan application 19.3.1. Exposing rule evaluation with a REST endpoint using Red Hat build of Quarkus You can expose the rule evaluation that is developed in Business Central with a REST endpoint using Red Hat build of Quarkus. Procedure Create a new module based on the module that contains the rules and Quarkus libraries, providing the REST support: Example dependencies for creating a new module Create a REST endpoint. The following is an example setup for creating a REST endpoint: Example FindApprovedLoansEndpoint endpoint setup @Path("/find-approved") public class FindApprovedLoansEndpoint { private static final KieContainer kContainer = KieServices.Factory.get().newKieClasspathContainer(); @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kContainer.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal("approvedApplications", approvedApplications); session.setGlobal("maxAmount", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } } In the example, a KieContainer containing the rules is created and added into a static field. The rules in the KieContainer are obtained from the other module in the class path. Using this approach, you can reuse the same KieContainer for subsequent invocations related to the FindApprovedLoansEndpoint endpoint without recompiling the rules. Note The two modules are consolidated in the process of migrating rule units to a Red Hat build of Kogito microservice using legacy API. For more information, see Migrating DRL rules units to Red Hat build of Kogito microservice using legacy API . When the FindApprovedLoansEndpoint endpoint is invoked, a new KieSession is created from the KieContainer . The KieSession is populated with the objects from LoanAppDto resulting from the unmarshalling of a JSON request. Example LoanAppDto class public class LoanAppDto { private int maxAmount; private List<LoanApplication> loanApplications; public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } public List<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(List<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } } When the fireAllRules() method is called, KieSession is fired and the business logic is evaluated against the input data. 
After business logic evaluation, the last rule collects all the approved applications in a list and the same list is returned as an output. Start the Red Hat build of Quarkus application. Invoke the FindApprovedLoansEndpoint endpoint with a JSON request that contains the loan applications to be checked. The value of the maxAmount is used in the rules as shown in the following example: Example curl request Example JSON response [ { "id": "ABC10001", "applicant": { "name": "John", "age": 45 }, "amount": 2000, "deposit": 1000, "approved": true } ] Note Using this approach, you cannot use the hot reload feature and cannot create a native image of the project. In the next steps, the missing Quarkus features are provided by the Kogito extension, which makes Quarkus aware of the DRL files and implements the hot reload feature in a similar way. 19.3.2. Migrating a rule evaluation to a Red Hat build of Kogito microservice using legacy API After exposing a rule evaluation with a REST endpoint, you can migrate the rule evaluation to a Red Hat build of Kogito microservice using legacy API. Procedure Add the following dependencies to the project pom.xml file to enable the use of Red Hat build of Quarkus and legacy API: Example dependencies for using Quarkus and legacy API Rewrite the REST endpoint implementation: Example REST endpoint implementation @Path("/find-approved") public class FindApprovedLoansEndpoint { @Inject KieRuntimeBuilder kieRuntimeBuilder; @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kieRuntimeBuilder.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal("approvedApplications", approvedApplications); session.setGlobal("maxAmount", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } } In the rewritten REST endpoint implementation, instead of creating the KieSession from the KieContainer , the KieSession is created automatically using an integrated KieRuntimeBuilder . The KieRuntimeBuilder is an interface provided by the kogito-legacy-api module that replaces the KieContainer . Using KieRuntimeBuilder , you can create KieBases and KieSessions in a similar way to how you create them with KieContainer . Red Hat build of Kogito automatically generates an implementation of the KieRuntimeBuilder interface at compile time and integrates the KieRuntimeBuilder into a class, which implements the FindApprovedLoansEndpoint REST endpoint. Start your Red Hat build of Quarkus application in development mode. You can also use hot reload to make changes to the rule files that are applied to the running application. Also, you can create a native image of your rule-based application. 19.3.3. Implementing rule units and automatic REST endpoint generation After migrating rule units to a Red Hat build of Kogito microservice, you can implement the rule units and automatic generation of the REST endpoint. In Red Hat build of Kogito, a rule unit contains a set of rules and the facts, against which the rules are matched. Rule units in Red Hat build of Kogito also come with data sources. A rule unit data source is a source of the data processed by a given rule unit and represents the entry point, which is used to evaluate the rule unit. Rule units use two types of data sources: DataStream : This is an append-only data source.
In DataStream , subscribers receive new and past messages, and the stream can be hot or cold in the reactive streams sense. Also, the facts added into the DataStream cannot be updated or removed. DataStore : This data source is for modifiable data. You can update or remove an object using the FactHandle that is returned when the object is added into the DataStore . Overall, a rule unit contains two parts: the definition of the fact to be evaluated and the set of rules evaluating the facts. Procedure Implement a fact definition using POJO: Example implementation of a fact definition using POJO package org.kie.kogito.queries; import org.kie.kogito.rules.DataSource; import org.kie.kogito.rules.DataStore; import org.kie.kogito.rules.RuleUnitData; public class LoanUnit implements RuleUnitData { private int maxAmount; private DataStore<LoanApplication> loanApplications; public LoanUnit() { this(DataSource.createStore(), 0); } public LoanUnit(DataStore<LoanApplication> loanApplications, int maxAmount) { this.loanApplications = loanApplications; this.maxAmount = maxAmount; } public DataStore<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(DataStore<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } } In the example, instead of using LoanAppDto , the LoanUnit class is bound directly. LoanAppDto is used to marshall or unmarshall JSON requests. Also, the example implements the org.kie.kogito.rules.RuleUnitData interface and uses a DataStore to contain the loan applications to be approved. The org.kie.kogito.rules.RuleUnitData is a marker interface to notify the decision engine that the LoanUnit class is part of a rule unit definition. In addition, the DataStore is responsible for allowing the rule engine to react to changes by firing new rules and triggering other rules. Additionally, the consequences of the rules modify the approved property in the example. In contrast, the maxAmount value is considered a configuration parameter for the rule unit and is not modified. The maxAmount is processed automatically during the rule evaluation and is set from the value passed in the JSON requests. Implement a DRL file: Example implementation of a DRL file The DRL file that you create must declare the same package as the fact definition implementation and a unit with the same name as the Java class implementing the RuleUnitData interface, to state that the DRL file belongs to the same rule unit. Also, the DRL file in the example is rewritten using OOPath expressions. In the DRL file, the data source acts as an entry point and the OOPath expression contains the data source name as root. However, the constraints are added in square brackets as follows: $l: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount <= maxAmount ] Alternatively, you can use the standard DRL syntax, in which you can specify the data source name as an entry point. However, you need to specify the type of the matched object again as shown in the following example, even if the decision engine can infer the type from the data source: $l: LoanApplication( applicant.age >= 20, deposit >= 1000, amount <= maxAmount ) from entry-point loanApplications In the example, the last rule that collects all the approved loan applications is replaced by a query that retrieves the list.
A rule unit defines the facts to be passed as input to evaluate the rules, and the query defines the expected output from the rule evaluation. Using this approach, Red Hat build of Kogito can automatically generate a class that executes the query and returns the output as shown in the following example: Example LoanUnitQueryFindApproved class public class LoanUnitQueryFindApproved implements org.kie.kogito.rules.RuleUnitQuery<List<org.kie.kogito.queries.LoanApplication>> { private final RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance; public LoanUnitQueryFindApproved(RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance) { this.instance = instance; } @Override public List<org.kie.kogito.queries.LoanApplication> execute() { return instance.executeQuery("FindApproved").stream().map(this::toResult).collect(toList()); } private org.kie.kogito.queries.LoanApplication toResult(Map<String, Object> tuple) { return (org.kie.kogito.queries.LoanApplication) tuple.get("$l"); } } The following is an example of a REST endpoint that takes a rule unit as input and passes it to a query executor to return the output: Example LoanUnitQueryFindApprovedEndpoint endpoint @Path("/find-approved") public class LoanUnitQueryFindApprovedEndpoint { @javax.inject.Inject RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit; public LoanUnitQueryFindApprovedEndpoint() { } public LoanUnitQueryFindApprovedEndpoint(RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit) { this.ruleUnit = ruleUnit; } @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<org.kie.kogito.queries.LoanApplication> executeQuery(org.kie.kogito.queries.LoanUnit unit) { RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance = ruleUnit.createInstance(unit); return instance.executeQuery(LoanUnitQueryFindApproved.class); } } Note You can also add multiple queries, and for each query a different REST endpoint is generated. For example, the FindApproved query generates the find-approved REST endpoint.
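As a quick illustration of calling the generated endpoint, the following request is a sketch only: the JSON field names mirror the LoanUnit getters ( maxAmount and loanApplications ), and the host, port, and values are placeholders.
curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{"maxAmount":5000, "loanApplications":[ {"id":"ABC10001","amount":2000,"deposit":1000,"applicant":{"age":45,"name":"John"}} ]}' http://localhost:8080/find-approved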
|
[
"public class LoanApplication { private String id; private Applicant applicant; private int amount; private int deposit; private boolean approved = false; public LoanApplication(String id, Applicant applicant, int amount, int deposit) { this.id = id; this.applicant = applicant; this.amount = amount; this.deposit = deposit; } }",
"public class Applicant { private String name; private int age; public Applicant(String name, int age) { this.name = name; this.age = age; } }",
"global Integer maxAmount; global java.util.List approvedApplications; rule LargeDepositApprove when USDl: LoanApplication( applicant.age >= 20, deposit >= 1000, amount <= maxAmount ) then modify(USDl) { setApproved(true) }; // loan is approved end rule LargeDepositReject when USDl: LoanApplication( applicant.age >= 20, deposit >= 1000, amount > maxAmount ) then modify(USDl) { setApproved(false) }; // loan is rejected end // ... more loans approval/rejections business rules rule CollectApprovedApplication when USDl: LoanApplication( approved ) then approvedApplications.add(USDl); // collect all approved loan applications end",
"<dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.example</groupId> <artifactId>drools-project</artifactId> <version>1.0-SNAPSHOT</version> </dependency> <dependencies>",
"@Path(\"/find-approved\") public class FindApprovedLoansEndpoint { private static final KieContainer kContainer = KieServices.Factory.get().newKieClasspathContainer(); @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kContainer.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal(\"approvedApplications\", approvedApplications); session.setGlobal(\"maxAmount\", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } }",
"public class LoanAppDto { private int maxAmount; private List<LoanApplication> loanApplications; public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } public List<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(List<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } }",
"curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{\"maxAmount\":5000, \"loanApplications\":[ {\"id\":\"ABC10001\",\"amount\":2000,\"deposit\":1000,\"applicant\":{\"age\":45,\"name\":\"John\"}}, {\"id\":\"ABC10002\",\"amount\":5000,\"deposit\":100,\"applicant\":{\"age\":25,\"name\":\"Paul\"}}, {\"id\":\"ABC10015\",\"amount\":1000,\"deposit\":100,\"applicant\":{\"age\":12,\"name\":\"George\"}} ]}' http://localhost:8080/find-approved",
"[ { \"id\": \"ABC10001\", \"applicant\": { \"name\": \"John\", \"age\": 45 }, \"amount\": 2000, \"deposit\": 1000, \"approved\": true } ]",
"<dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-quarkus-rules</artifactId> </dependency> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-legacy-api</artifactId> </dependency> </dependencies>",
"@Path(\"/find-approved\") public class FindApprovedLoansEndpoint { @Inject KieRuntimeBuilder kieRuntimeBuilder; @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kieRuntimeBuilder.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal(\"approvedApplications\", approvedApplications); session.setGlobal(\"maxAmount\", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } }",
"package org.kie.kogito.queries; import org.kie.kogito.rules.DataSource; import org.kie.kogito.rules.DataStore; import org.kie.kogito.rules.RuleUnitData; public class LoanUnit implements RuleUnitData { private int maxAmount; private DataStore<LoanApplication> loanApplications; public LoanUnit() { this(DataSource.createStore(), 0); } public LoanUnit(DataStore<LoanApplication> loanApplications, int maxAmount) { this.loanApplications = loanApplications; this.maxAmount = maxAmount; } public DataStore<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(DataStore<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } }",
"package org.kie.kogito.queries; unit LoanUnit; // no need to using globals, all variables and facts are stored in the rule unit rule LargeDepositApprove when USDl: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount <= maxAmount ] // oopath style then modify(USDl) { setApproved(true) }; end rule LargeDepositReject when USDl: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount > maxAmount ] then modify(USDl) { setApproved(false) }; end // ... more loans approval/rejections business rules // approved loan applications are now retrieved through a query query FindApproved USDl: /loanApplications[ approved ] end",
"public class LoanUnitQueryFindApproved implements org.kie.kogito.rules.RuleUnitQuery<List<org.kie.kogito.queries.LoanApplication>> { private final RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance; public LoanUnitQueryFindApproved(RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance) { this.instance = instance; } @Override public List<org.kie.kogito.queries.LoanApplication> execute() { return instance.executeQuery(\"FindApproved\").stream().map(this::toResult).collect(toList()); } private org.kie.kogito.queries.LoanApplication toResult(Map<String, Object> tuple) { return (org.kie.kogito.queries.LoanApplication) tuple.get(\"USDl\"); } }",
"@Path(\"/find-approved\") public class LoanUnitQueryFindApprovedEndpoint { @javax.inject.Inject RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit; public LoanUnitQueryFindApprovedEndpoint() { } public LoanUnitQueryFindApprovedEndpoint(RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit) { this.ruleUnit = ruleUnit; } @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<org.kie.kogito.queries.LoanApplication> executeQuery(org.kie.kogito.queries.LoanUnit unit) { RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance = ruleUnit.createInstance(unit); return instance.executeQuery(LoanUnitQueryFindApproved.class); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/con-migrate-drl-to-kogito-loan-overview_migration-kogito-microservices
|
3.13. Attaching an ISO Image to a Virtual Machine
|
3.13. Attaching an ISO Image to a Virtual Machine This Ruby example attaches a CD-ROM to a virtual machine and changes it to an ISO image in order to install the guest operating system. # Get the reference to the "vms" service: vms_service = connection.system_service.vms_service # Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service # List the first CDROM device: cdrom = cdroms_service.list[0] # Locate the service that manages the CDROM device you just found: cdrom_service = cdroms_service.cdrom_service(cdrom.id) # Change the CD of the VM to 'my_iso_file.iso'. By default this # operation permanently changes the disk that is visible to the # virtual machine after the next boot, but it does not have any effect # on the currently running virtual machine. If you want to change the # disk that is visible to the current running virtual machine, change # the `current` parameter's value to `true`. cdrom_service.update( OvirtSDK4::Cdrom.new( file: { id: 'CentOS-7-x86_64-DVD-1511.iso' } ), current: false ) For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4%2FVmService:cdroms_service .
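If the new image should be visible to the virtual machine immediately, rather than only after the next boot, the same update call can be issued with the current parameter set to true. The following Ruby sketch reuses the cdrom_service object obtained above; the 'my_iso_file.iso' identifier is an assumed ISO file already available in a data storage domain:
# Change the CD visible to the currently running virtual machine.
# With `current: true` the change applies to the running VM rather than
# to the configuration used after the next boot.
cdrom_service.update(
  OvirtSDK4::Cdrom.new(
    file: {
      id: 'my_iso_file.iso'
    }
  ),
  current: true
)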
|
[
"Get the reference to the \"vms\" service: vms_service = connection.system_service.vms_service Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service List the first CDROM device: cdrom = cdroms_service.list[0] Locate the service that manages the CDROM device you just found: cdrom_service = cdroms_service.cdrom_service(cdrom.id) Change the CD of the VM to 'my_iso_file.iso'. By default this operation permanently changes the disk that is visible to the virtual machine after the next boot, but it does not have any effect on the currently running virtual machine. If you want to change the disk that is visible to the current running virtual machine, change the `current` parameter's value to `true`. cdrom_service.update( OvirtSDK4::Cdrom.new( file: { id: 'CentOS-7-x86_64-DVD-1511.iso' } ), current: false )"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/attaching_an_iso_image_to_a_virtual_machine
|
Chapter 18. Managing user access
|
Chapter 18. Managing user access 18.1. Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes Red Hat Advanced Cluster Security for Kubernetes (RHACS) comes with role-based access control (RBAC) that you can use to configure roles and grant various levels of access to Red Hat Advanced Cluster Security for Kubernetes for different users. Beginning with version 3.63, RHACS includes a scoped access control feature that enables you to configure fine-grained and specific sets of permissions that define how a given RHACS user or a group of users can interact with RHACS, which resources they can access, and which actions they can perform. Roles are a collection of permission sets and access scopes. You can assign roles to users and groups by specifying rules. You can configure these rules when you configure an authentication provider. There are two types of roles in Red Hat Advanced Cluster Security for Kubernetes: System roles that are created by Red Hat and cannot be changed. Custom roles, which Red Hat Advanced Cluster Security for Kubernetes administrators can create and change at any time. Note If you assign multiple roles for a user, they get access to the combined permissions of the assigned roles. If you have users assigned to a custom role, and you delete that role, all associated users transfer to the minimum access role that you have configured. Permission sets are a set of permissions that define what actions a role can perform on a given resource. Resources are the functionalities of Red Hat Advanced Cluster Security for Kubernetes for which you can set view ( read ) and modify ( write ) permissions. There are two types of permission sets in Red Hat Advanced Cluster Security for Kubernetes: System permission sets, which are created by Red Hat and cannot be changed. Custom permission sets, which Red Hat Advanced Cluster Security for Kubernetes administrators can create and change at any time. Access scopes are a set of Kubernetes and OpenShift Container Platform resources that users can access. For example, you can define an access scope that only allows users to access information about pods in a given project. There are two types of access scopes in Red Hat Advanced Cluster Security for Kubernetes: System access scopes, which are created by Red Hat and cannot be changed. Custom access scopes, which Red Hat Advanced Cluster Security for Kubernetes administrators can create and change at any time. 18.1.1. System roles Red Hat Advanced Cluster Security for Kubernetes (RHACS) includes some default system roles that you can apply to users when you create rules. You can also create custom roles as required. System role Description Admin This role is targeted for administrators. Use it to provide read and write access to all resources. Analyst This role is targeted for a user who cannot make any changes, but can view everything. Use it to provide read-only access for all resources. Continuous Integration This role is targeted for CI (continuous integration) systems and includes the permission set required to enforce deployment policies. None This role has no read and write access to any resource. You can set this role as the minimum access role for all users. Sensor Creator RHACS uses this role to automate new cluster setups. It includes the permission set to create Sensors in secured clusters. Scope Manager This role includes the minimum permissions required to create and modify access scopes. 
Vulnerability Management Approver This role allows you to provide access to approve vulnerability deferrals or false positive requests. Vulnerability Management Requester This role allows you to provide access to request vulnerability deferrals or false positives. Vulnerability Report Creator This role allows you to create and manage vulnerability reporting configurations for scheduled vulnerability reports. 18.1.1.1. Viewing the permission set and access scope for a system role You can view the permission set and access scope for the default system roles. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Roles . Click on one of the roles to view its details. The details page shows the permission set and access scope for the selected role. Note You cannot modify permission set and access scope for the default system roles. 18.1.1.2. Creating a custom role You can create new roles from the Access Control view. Prerequisites You must have the Admin role, or read and write permissions for the Access resource to create, modify, and delete custom roles. You must create a permission set and an access scope for the custom role before creating the role. Procedure In the RHACS portal, go to Platform Configuration Access Control . Select Roles . Click Create role . Enter a Name and Description for the new role. Select a Permission set for the role. Select an Access scope for the role. Click Save . Additional resources Creating a custom permission set Creating a custom access scope 18.1.1.3. Assigning a role to a user or a group You can use the RHACS portal to assign roles to a user or a group. Procedure In the RHACS portal, go to Platform Configuration Access Control . From the list of authentication providers, select the authentication provider. Click Edit minimum role and rules . Under the Rules section, click Add new rule . For Key , select one of the values from userid , name , email or group . For Value , enter the value of the user ID, name, email address or group based on the key you selected. Click the Role drop-down menu and select the role you want to assign. Click Save . You can repeat these instructions for each user or group and assign different roles. 18.1.2. System permission sets Red Hat Advanced Cluster Security for Kubernetes includes some default system permission sets that you can apply to roles. You can also create custom permission sets as required. Permission set Description Admin Provides read and write access to all resources. Analyst Provides read-only access for all resources. Continuous Integration This permission set is targeted for CI (continuous integration) systems and includes the permissions required to enforce deployment policies. Network Graph Viewer Provides the minimum permissions to view network graphs. None No read and write permissions are allowed for any resource. Sensor Creator Provides permissions for resources that are required to create Sensors in secured clusters. 18.1.2.1. Viewing the permissions for a system permission set You can view the permissions for a system permission set in the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Permission sets . Click on one of the permission sets to view its details. The details page shows a list of resources and their permissions for the selected permission set. Note You cannot modify permissions for a system permission set. 18.1.2.2. 
Creating a custom permission set You can create new permission sets from the Access Control view. Prerequisites You must have the Admin role, or read and write permissions for the Access resource to create, modify, and delete permission sets. Procedure In the RHACS portal, go to Platform Configuration Access Control . Select Permission sets . Click Create permission set . Enter a Name and Description for the new permission set. For each resource, under the Access level column, select one of the permissions from No access , Read access , or Read and Write access . Warning If you are configuring a permission set for users, you must grant read-only permissions for the following resources: Alert Cluster Deployment Image NetworkPolicy NetworkGraph WorkflowAdministration Secret These permissions are preselected when you create a new permission set. If you do not grant these permissions, users will experience issues with viewing pages in the RHACS portal. Click Save . 18.1.3. System access scopes Red Hat Advanced Cluster Security for Kubernetes includes some default system access scopes that you can apply to roles. You can also create custom access scopes as required. Access scope Description Unrestricted Provides access to all clusters and namespaces that Red Hat Advanced Cluster Security for Kubernetes monitors. Deny All Provides no access to any Kubernetes and OpenShift Container Platform resources. 18.1.3.1. Viewing the details for a system access scope You can view the Kubernetes and OpenShift Container Platform resources that are allowed and not allowed for an access scope in the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Access scopes . Click on one of the access scopes to view its details. The details page shows a list of clusters and namespaces, and which ones are allowed for the selected access scope. Note You cannot modify allowed resources for a system access scope. 18.1.3.2. Creating a custom access scope You can create new access scopes from the Access Control view. Prerequisites You must have the Admin role, or a role with the permission set with read and write permissions for the Access resource to create, modify, and delete permission sets. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Access scopes . Click Create access scope . Enter a Name and Description for the new access scope. Under the Allowed resources section: Use the Cluster filter and Namespace filter fields to filter the list of clusters and namespaces visible in the list. Expand the Cluster name to see the list of namespaces in that cluster. To allow access to all namespaces in a cluster, toggle the switch in the Manual selection column. Note Access to a specific cluster provides users with access to the following resources within the scope of the cluster: OpenShift Container Platform or Kubernetes cluster metadata and security information Compliance information for authorized clusters Node metadata and security information Access to all namespaces in that cluster and their associated security information To allow access to a namespace, toggle the switch in the Manual selection column for a namespace. 
Note Access to a specific namespace gives access to the following information within the scope of the namespace: Alerts and violations for deployments Vulnerability data for images Deployment metadata and security information Role and user information Network graph, policy, and baseline information for deployments Process information and process baseline configuration Prioritized risk information for each deployment If you want to allow access to clusters and namespaces based on labels, click Add label selector under the Label selection rules section. Then click Add rule to specify Key and Value pairs for the label selector. You can specify labels for clusters and namespaces. Click Save . 18.1.4. Resource definitions Red Hat Advanced Cluster Security for Kubernetes includes multiple resources. The following table lists the resources and describes the actions that users can perform with the read or write permission. Resource Read permission Write permission Access View configurations for single sign-on (SSO) and role-based access control (RBAC) rules that match user metadata to Red Hat Advanced Cluster Security for Kubernetes roles and users that have accessed your Red Hat Advanced Cluster Security for Kubernetes instance, including the metadata that the authentication providers provide about them. Create, modify, or delete SSO configurations and configured RBAC rules. Administration View the following items: Options for data retention, security notices and other related configurations The current logging verbosity level in Red Hat Advanced Cluster Security for Kubernetes components Manifest content for the uploaded probe files Existing image scanner integrations The status of automatic upgrades Metadata about Red Hat Advanced Cluster Security for Kubernetes service-to-service authentication The content of the scanner bundle (download) Edit the following items: Data retention, security notices, and related configurations The logging level Support packages in Central (upload) Image scanner integrations (create/modify/delete) Automatic upgrades for secured clusters (enable/disable) Service-to-service authentication credentials (revoke/re-issue) Alert View existing policy violations. Resolve or edit policy violations. CVE Internal use only Internal use only Cluster View existing secured clusters. Add new secured clusters and modify or delete existing clusters. Compliance View compliance standards and results, as well as recent compliance runs and the associated completion status. Trigger compliance runs. Deployment View deployments (workloads) in secured clusters. N/A DeploymentExtension View the following items: Process baselines Process activity in deployments Risk results Modify the following items: Process baselines (add or remove processes) Detection Check build-time policies against images or deployment YAML. N/A Image View images, their components, and their vulnerabilities. N/A Integration View the following items: Existing API tokens Existing integrations with automated backup systems such as Amazon Web Services (AWS) S3 Existing image registry integrations Existing integrations for notification systems like email, Jira, or webhooks Modify the following items: API tokens (create new tokens or revoke existing tokens) The configurations of backup integrations Image registry integrations (create/edit/delete) Notification integrations (create/edit/delete) K8sRole View roles for Kubernetes RBAC in secured clusters. 
N/A K8sRoleBinding View role bindings for Kubernetes RBAC in secured clusters. N/A K8sSubject View users and groups for Kubernetes RBAC in secured clusters. N/A Namespace View existing Kubernetes namespaces in secured clusters. N/A NetworkGraph View active and allowed network connections in secured clusters. N/A NetworkPolicy View existing network policies in secured clusters and simulate changes. Apply network policy changes in secured clusters. Node View existing Kubernetes nodes in secured clusters. N/A WorkflowAdministration View all resource collections. Add, modify, or delete resource collections. Role View existing Red Hat Advanced Cluster Security for Kubernetes RBAC roles and their permissions. Add, modify, or delete roles and their permissions. Secret View metadata about secrets in secured clusters. N/A ServiceAccount List Kubernetes service accounts in secured clusters. N/A 18.1.5. Declarative configuration for authentication and authorization resources You can use declarative configuration for authentication and authorization resources such as authentication providers, roles, permission sets, and access scopes. For instructions on how to use declarative configuration, see "Using declarative configuration" in the "Additional resources" section. Additional resources Using declarative configuration 18.2. Enabling PKI authentication If you use an enterprise certificate authority (CA) for authentication, you can configure Red Hat Advanced Cluster Security for Kubernetes (RHACS) to authenticate users by using their personal certificates. After you configure PKI authentication, users and API clients can log in using their personal certificates. Users without certificates can still use other authentication options, including API tokens, the local administrator password, or other authentication providers. PKI authentication is available on the same port number as the Web UI, gRPC, and REST APIs. When you configure PKI authentication, by default, Red Hat Advanced Cluster Security for Kubernetes uses the same port for PKI, web UI, gRPC, other single sign-on (SSO) providers, and REST APIs. You can also configure a separate port for PKI authentication by using a YAML configuration file to configure and expose endpoints. 18.2.1. Configuring PKI authentication by using the RHACS portal You can configure Public Key Infrastructure (PKI) authentication by using the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create Auth Provider and select User Certificates from the drop-down list. In the Name field, specify a name for this authentication provider. In the CA certificate(s) (PEM) field, paste your root CA certificate in PEM format. Assign a Minimum access role for users who access RHACS using PKI authentication. A user must have the permissions granted to this role or a role with higher permissions to log in to RHACS. Tip For security, Red Hat recommends first setting the Minimum access role to None while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. To add access rules for users and groups accessing RHACS, click Add new rule in the Rules section. For example, to give the Admin role to a user called administrator , you can use the following key-value pairs to create access rules: Key Value Name administrator Role Admin Click Save . 18.2.2. 
Configuring PKI authentication by using the roxctl CLI You can configure PKI authentication by using the roxctl CLI. Procedure Run the following command: USD roxctl -e <hostname>:<port_number> central userpki create -c <ca_certificate_file> -r <default_role_name> <provider_name> 18.2.3. Updating authentication keys and certificates You can update your authentication keys and certificates by using the RHACS portal. Procedure Create a new authentication provider. Copy the role mappings from your old authentication provider to the new authentication provider. Rename or delete the old authentication provider with the old root CA key. 18.2.4. Logging in by using a client certificate After you configure PKI authentication, users see a certificate prompt in the RHACS portal login page. The prompt only shows up if a client certificate trusted by the configured root CA is installed on the user's system. Use the procedure described in this section to log in by using a client certificate. Procedure Open the RHACS portal. Select a certificate in the browser prompt. On the login page, select the authentication provider name option to log in with a certificate. If you do not want to log in by using the certificate, you can also log in by using the administrator password or another login method. Note Once you use a client certificate to log into the RHACS portal, you cannot log in with a different certificate unless you restart your browser. 18.3. Understanding authentication providers An authentication provider connects to a third-party source of user identity (for example, an identity provider or IDP), gets the user identity, issues a token based on that identity, and returns the token to Red Hat Advanced Cluster Security for Kubernetes (RHACS). This token allows RHACS to authorize the user. RHACS uses the token within the user interface and API calls. After installing RHACS, you must set up your IDP to authorize users. Note If you are using OpenID Connect (OIDC) as your IDP, RHACS relies on mapping rules that examine the values of specific claims like groups , email , userid and name from either the user ID token or the UserInfo endpoint response to authorize the users. If these details are absent, the mapping cannot succeed and the user does not get access to the required resources. Therefore, you need to ensure that the claims required to authorize users from your IDP, for example, groups , are included in the authentication response of your IDP to enable successful mapping. Additional resources Configuring Okta Identity Cloud as a SAML 2.0 identity provider Configuring Google Workspace as an OIDC identity provider Configuring OpenShift Container Platform OAuth server as an identity provider Connecting Azure AD to RHACS using SSO configuration 18.3.1. Claim mappings A claim is the data an identity provider includes about a user inside the token they issue. Using claim mappings, you can specify if RHACS should customize the claim attribute it receives from an IDP to another attribute in the RHACS-issued token. If you do not use the claim mapping, RHACS does not include the claim attribute in the RHACS-issued token. For example, you can map from roles in the user identity to groups in the RHACS-issued token using claim mapping. RHACS uses different default claim mappings for every authentication provider. 18.3.1.1. OIDC default claim mappings The following list provides the default OIDC claim mappings: sub to userid name to name email to email groups to groups 18.3.1.2. 
Auth0 default claim mappings The Auth0 default claim mappings are the same as the OIDC default claim mappings. 18.3.1.3. SAML 2.0 default claim mappings The following list applies to SAML 2.0 default claim mappings: Subject.NameID is mapped to userid every SAML AttributeStatement.Attribute from the response gets mapped to its name 18.3.1.4. Google IAP default claim mappings The following list provides the Google IAP default claim mappings: sub to userid email to email hd to hd google.access_levels to access_levels 18.3.1.5. User certificates default claim mappings User certificates differ from all other authentication providers because instead of communicating with a third-party IDP, they get user information from certificates used by the user. The default claim mappings for user certificates include: CertFingerprint to userid Subject Common Name to name EmailAddresses to email Subject Organizational Unit to groups 18.3.1.6. OpenShift Auth default claim mappings The following list provides the OpenShift Auth default claim mappings: groups to groups uid to userid name to name 18.3.2. Rules To authorize users, RHACS relies on mapping rules that examine the values of specific claims such as groups , email , userid , and name from the user identity. Rules allow mapping of users who have attributes with a specific value to a specific role. As an example, a rule could include the following:`key` is email , value is [email protected] , role is Admin . If the claim is missing, the mapping cannot succeed, and the user does not get access to the required resources. Therefore, to enable successful mapping, you must ensure that the authentication response from your IDP includes the required claims to authorize users, for example, groups . 18.3.3. Minimum access role RHACS assigns a minimum access role to every caller with a RHACS token issued by a particular authentication provider. The minimum access role is set to None by default. For example, suppose there is an authentication provider with the minimum access role of Analyst . In that case, all users who log in using this provider will have the Analyst role assigned to them. 18.3.4. Required attributes Required attributes can restrict issuing of the RHACS token based on whether a user identity has an attribute with a specific value. For example, you can configure RHACS only to issue a token when the attribute with key is_internal has the attribute value true . Users with the attribute is_internal set to false or not set do not get a token. 18.4. Configuring identity providers 18.4.1. Configuring Okta Identity Cloud as a SAML 2.0 identity provider You can use Okta as a single sign-on (SSO) provider for Red Hat Advanced Cluster Security for Kubernetes (RHACS). 18.4.1.1. Creating an Okta app Before you can use Okta as a SAML 2.0 identity provider for Red Hat Advanced Cluster Security for Kubernetes, you must create an Okta app. Warning Okta's Developer Console does not support the creation of custom SAML 2.0 applications. If you are using the Developer Console , you must first switch to the Admin Console ( Classic UI ). To switch, click Developer Console in the top left of the page and select Classic UI . Prerequisites You must have an account with administrative privileges for the Okta portal. Procedure On the Okta portal, select Applications from the menu bar. Click Add Application and then select Create New App . 
In the Create a New Application Integration dialog box, leave Web as the platform and select SAML 2.0 as the protocol that you want to sign in users. Click Create . On the General Settings page, enter a name for the app in the App name field. Click . On the SAML Settings page, set values for the following fields: Single sign on URL Specify it as https://<RHACS_portal_hostname>/sso/providers/saml/acs . Leave the Use this for Recipient URL and Destination URL option checked. If your RHACS portal is accessible at different URLs, you can add them here by checking the Allow this app to request other SSO URLs option and add the alternative URLs using the specified format. Audience URI (SP Entity ID) Set the value to RHACS or another value of your choice. Remember the value you choose; you will need this value when you configure Red Hat Advanced Cluster Security for Kubernetes. Attribute Statements You must add at least one attribute statement. Red Hat recommends using the email attribute: Name: email Format: Unspecified Value: user.email Verify that you have configured at least one Attribute Statement before continuing. Click . On the Feedback page, select an option that applies to you. Select an appropriate App type . Click Finish . After the configuration is complete, you are redirected to the Sign On settings page for the new app. A yellow box contains links to the information that you need to configure Red Hat Advanced Cluster Security for Kubernetes. After you have created the app, assign Okta users to this application. Go to the Assignments tab, and assign the set of individual users or groups that can access Red Hat Advanced Cluster Security for Kubernetes. For example, assign the group Everyone to allow all users in the organization to access Red Hat Advanced Cluster Security for Kubernetes. 18.4.1.2. Configuring a SAML 2.0 identity provider Use the instructions in this section to integrate a Security Assertion Markup Language (SAML) 2.0 identity provider with Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites You must have permissions to configure identity providers in RHACS. For Okta identity providers, you must have an Okta app configured for RHACS. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create auth provider and select SAML 2.0 from the drop-down list. In the Name field, enter a name to identify this authentication provider; for example, Okta or Google . The integration name is shown on the login page to help users select the correct sign-in option. In the ServiceProvider issuer field, enter the value that you are using as the Audience URI or SP Entity ID in Okta, or a similar value in other providers. Select the type of Configuration : Option 1: Dynamic Configuration : If you select this option, enter the IdP Metadata URL , or the URL of Identity Provider metadata available from your identity provider console. The configuration values are acquired from the URL. Option 2: Static Configuration : Copy the required static fields from the View Setup Instructions link in the Okta console, or a similar location for other providers: IdP Issuer IdP SSO URL Name/ID Format IdP Certificate(s) (PEM) Assign a Minimum access role for users who access RHACS using SAML. Tip Set the Minimum access role to Admin while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. Click Save . 
Important If your SAML identity provider's authentication response meets the following criteria: Includes a NotValidAfter assertion: The user session remains valid until the time specified in the NotValidAfter field has elapsed. After the user session expires, users must reauthenticate. Does not include a NotValidAfter assertion: The user session remains valid for 30 days, and then users must reauthenticate. Verification In the RHACS portal, go to Platform Configuration Access Control . Select the Auth Providers tab. Click the authentication provider for which you want to verify the configuration. Select Test login from the Auth Provider section header. The Test login page opens in a new browser tab. Sign in with your credentials. If you logged in successfully, RHACS shows the User ID and User Attributes that the identity provider sent for the credentials that you used to log in to the system. If your login attempt failed, RHACS shows a message describing why the identity provider's response could not be processed. Close the Test login browser tab. Note Even if the response indicates successful authentication, you might need to create additional access rules based on the user metadata from your identity provider. 18.4.2. Configuring Google Workspace as an OIDC identity provider You can use Google Workspace as a single sign-on (SSO) provider for Red Hat Advanced Cluster Security for Kubernetes. 18.4.2.1. Setting up OAuth 2.0 credentials for your GCP project To configure Google Workspace as an identity provider for Red Hat Advanced Cluster Security for Kubernetes, you must first configure OAuth 2.0 credentials for your GCP project. Prerequisites You must have administrator-level access to your organization's Google Workspace account to create a new project, or permissions to create and configure OAuth 2.0 credentials for an existing project. Red Hat recommends that you create a new project for managing access to Red Hat Advanced Cluster Security for Kubernetes. Procedure Create a new Google Cloud Platform (GCP) project, see the Google documentation topic creating and managing projects . After you have created the project, open the Credentials page in the Google API Console. Verify the project name listed in the upper left corner near the logo to make sure that you are using the correct project. To create new credentials, go to Create Credentials OAuth client ID . Choose Web application as the Application type . In the Name box, enter a name for the application, for example, RHACS . In the Authorized redirect URIs box, enter https://<stackrox_hostname>:<port_number>/sso/providers/oidc/callback . replace <stackrox_hostname> with the hostname on which you expose your Central instance. replace <port_number> with the port number on which you expose Central. If you are using the standard HTTPS port 443 , you can omit the port number. Click Create . This creates an application and credentials and redirects you back to the credentials page. An information box opens, showing details about the newly created application. Close the information box. Copy and save the Client ID that ends with .apps.googleusercontent.com . You can check this client ID by using the Google API Console. Select OAuth consent screen from the navigation menu on the left. Note The OAuth consent screen configuration is valid for the entire GCP project, and not only to the application you created in the steps. 
If you already have an OAuth consent screen configured in this project and want to apply different settings for Red Hat Advanced Cluster Security for Kubernetes login, create a new GCP project. On the OAuth consent screen page: Choose the Application type as Internal . If you select Public , anyone with a Google account can sign in. Enter a descriptive Application name . This name is shown to users on the consent screen when they sign in. For example, use RHACS or <organization_name> SSO for Red Hat Advanced Cluster Security for Kubernetes . Verify that the Scopes for Google APIs only lists email , profile , and openid scopes. Only these scopes are required for single sign-on. If you grant additional scopes it increases the risk of exposing sensitive data. 18.4.2.2. Specifying a client secret Red Hat Advanced Cluster Security for Kubernetes version 3.0.39 and newer supports the OAuth 2.0 Authorization Code Grant authentication flow when you specify a client secret. When you use this authentication flow, Red Hat Advanced Cluster Security for Kubernetes uses a refresh token to keep users logged in beyond the token expiration time configured in your OIDC identity provider. When users log out, Red Hat Advanced Cluster Security for Kubernetes deletes the refresh token from the client-side. Additionally, if your identity provider API supports refresh token revocation, Red Hat Advanced Cluster Security for Kubernetes also sends a request to your identity provider to revoke the refresh token. You can specify a client secret when you configure Red Hat Advanced Cluster Security for Kubernetes to integrate with an OIDC identity provider. Note You cannot use a Client Secret with the Fragment Callback mode . You cannot edit configurations for existing authentication providers. You must create a new OIDC integration in Red Hat Advanced Cluster Security for Kubernetes if you want to use a Client Secret . Red Hat recommends using a client secret when connecting Red Hat Advanced Cluster Security for Kubernetes with an OIDC identity provider. If you do not want to use a Client Secret , you must select the Do not use Client Secret (not recommended) option. 18.4.2.3. Configuring an OIDC identity provider You can configure Red Hat Advanced Cluster Security for Kubernetes (RHACS) to use your OpenID Connect (OIDC) identity provider. Prerequisites You must have already configured an application in your identity provider, such as Google Workspace. You must have permissions to configure identity providers in RHACS. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create auth provider and select OpenID Connect from the drop-down list. Enter information in the following fields: Name : A name to identify your authentication provider; for example, Google Workspace . The integration name is shown on the login page to help users select the correct sign-in option. Callback mode : Select Auto-select (recommended) , which is the default value, unless the identity provider requires another mode. Note Fragment mode is designed around the limitations of Single Page Applications (SPAs). Red Hat only supports the Fragment mode for early integrations and does not recommended using it for later integrations. Issuer : The root URL of your identity provider; for example, https://accounts.google.com for Google Workspace. See your identity provider documentation for more information. 
Note If you are using RHACS version 3.0.49 and later, for Issuer you can perform these actions: Prefix your root URL with https+insecure:// to skip TLS validation. This configuration is insecure and Red Hat does not recommended it. Only use it for testing purposes. Specify query strings; for example, ?key1=value1&key2=value2 along with the root URL. RHACS appends the value of Issuer as you entered it to the authorization endpoint. You can use it to customize your provider's login screen. For example, you can optimize the Google Workspace login screen to a specific hosted domain by using the hd parameter , or preselect an authentication method in PingFederate by using the pfidpadapterid parameter . Client ID : The OIDC Client ID for your configured project. Client Secret : Enter the client secret provided by your identity provider (IdP). If you are not using a client secret, which is not recommended, select Do not use Client Secret . Assign a Minimum access role for users who access RHACS using the selected identity provider. Tip Set the Minimum access role to Admin while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. To add access rules for users and groups accessing RHACS, click Add new rule in the Rules section. For example, to give the Admin role to a user called administrator , you can use the following key-value pairs to create access rules: Key Value Name administrator Role Admin Click Save . Verification In the RHACS portal, go to Platform Configuration Access Control . Select the Auth providers tab. Select the authentication provider for which you want to verify the configuration. Select Test login from the Auth Provider section header. The Test login page opens in a new browser tab. Log in with your credentials. If you logged in successfully, RHACS shows the User ID and User Attributes that the identity provider sent for the credentials that you used to log in to the system. If your login attempt failed, RHACS shows a message describing why the identity provider's response could not be processed. Close the Test Login browser tab. 18.4.3. Configuring OpenShift Container Platform OAuth server as an identity provider OpenShift Container Platform includes a built-in OAuth server that you can use as an authentication provider for Red Hat Advanced Cluster Security for Kubernetes (RHACS). 18.4.3.1. Configuring OpenShift Container Platform OAuth server as an identity provider To integrate the built-in OpenShift Container Platform OAuth server as an identity provider for RHACS, use the instructions in this section. Prerequisites You must have the Access permission to configure identity providers in RHACS. You must have already configured users and groups in OpenShift Container Platform OAuth server through an identity provider. For information about the identity provider requirements, see Understanding identity provider configuration . Note The following procedure configures only a single main route named central for the OpenShift Container Platform OAuth server. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create auth provider and select OpenShift Auth from the drop-down list. Enter a name for the authentication provider in the Name field. Assign a Minimum access role for users that access RHACS using the selected identity provider. A user must have the permissions granted to this role or a role with higher permissions to log in to RHACS. 
Tip For security, Red Hat recommends first setting the Minimum access role to None while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. Optional: To add access rules for users and groups accessing RHACS, click Add new rule in the Rules section, then enter the rule information and click Save . You will need attributes for the user or group so that you can configure access. Tip Group mappings are more robust because groups are usually associated with teams or permissions sets and require modification less often than users. To get user information in OpenShift Container Platform, you can use one of the following methods: Click User Management Users <username > YAML . Access the k8s/cluster/user.openshift.io~v1~User/<username>/yaml file and note the values for name , uid ( userid in RHACS), and groups . Use the OpenShift Container Platform API as described in the OpenShift Container Platform API reference . The following configuration example describes how to configure rules for an Admin role with the following attributes: name : administrator groups : ["system:authenticated", "system:authenticated:oauth", "myAdministratorsGroup"] uid : 12345-00aa-1234-123b-123fcdef1234 You can add a rule for this administrator role using one of the following steps: To configure a rule for a name, select name from the Key drop-down list, enter administrator in the Value field, then select Administrator under Role . To configure a rule for a group, select groups from the Key drop-down list, enter myAdministratorsGroup in the Value field, then select Admin under Role . To configure a rule for a user name, select userid from the Key drop-down list, enter 12345-00aa-1234-123b-123fcdef1234 in the Value field, then select Admin under Role . Important If you use a custom TLS certificate for OpenShift Container Platform OAuth server, you must add the root certificate of the CA to Red Hat Advanced Cluster Security for Kubernetes as a trusted root CA. Otherwise, Central cannot connect to the OpenShift Container Platform OAuth server. To enable the OpenShift Container Platform OAuth server integration when installing Red Hat Advanced Cluster Security for Kubernetes using the roxctl CLI, set the ROX_ENABLE_OPENSHIFT_AUTH environment variable to true in Central: USD oc -n stackrox set env deploy/central ROX_ENABLE_OPENSHIFT_AUTH=true For access rules, the OpenShift Container Platform OAuth server does not return the key Email . Additional resources Configuring an LDAP identity provider Adding trusted certificate authorities 18.4.3.2. Creating additional routes for OpenShift Container Platform OAuth server When you configure OpenShift Container Platform OAuth server as an identity provider by using Red Hat Advanced Cluster Security for Kubernetes portal, RHACS configures only a single route for the OAuth server. However, you can create additional routes by specifying them as annotations in the Central custom resource. Prerequisites You must have configured Service accounts as OAuth clients for your OpenShift Container Platform OAuth server. 
Procedure If you installed RHACS using the RHACS Operator: Create a CENTRAL_ADDITIONAL_ROUTES environment variable that contains a patch for the Central custom resource: USD CENTRAL_ADDITIONAL_ROUTES=' spec: central: exposure: loadBalancer: enabled: false port: 443 nodePort: enabled: false route: enabled: true persistence: persistentVolumeClaim: claimName: stackrox-db customize: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"central\"}}" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"second-central\"}}" 4 ' 1 The redirect URI for setting the main route. 2 The redirect URI reference for the main route. 3 The redirect for setting the second route. 4 The redirect reference for the second route. Apply the CENTRAL_ADDITIONAL_ROUTES patch to the Central custom resource: USD oc patch centrals.platform.stackrox.io \ -n <namespace> \ 1 <custom-resource> \ 2 --patch "USDCENTRAL_ADDITIONAL_ROUTES" \ --type=merge 1 Replace <namespace> with the name of the project that contains the Central custom resource. 2 Replace <custom-resource> with the name of the Central custom resource. Or, if you installed RHACS using Helm: Add the following annotations to your values-public.yaml file: customize: central: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"central\"}}" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"second-central\"}}" 4 1 The redirect for setting the main route. 2 The redirect reference for the main route. 3 The redirect for setting the second route. 4 The redirect reference for the second route. Apply the custom annotations to the Central custom resource by using helm upgrade : USD helm upgrade -n stackrox \ stackrox-central-services rhacs/central-services \ -f <path_to_values_public.yaml> 1 1 Specify the path of the values-public.yaml configuration file using the -f option. Additional resources Service accounts as OAuth clients Redirect URIs for service accounts as OAuth clients 18.4.4. Connecting Azure AD to RHACS using SSO configuration To connect an Azure Active Directory (AD) to RHACS using Sign-On (SSO) configuration, you need to add specific claims (for example, group claim to tokens) and assign users, groups, or both to the enterprise application. 18.4.4.1. Adding group claims to tokens for SAML applications using SSO configuration Configure the application registration in Azure AD to include group claims in tokens. For instructions, see Add group claims to tokens for SAML applications using SSO configuration . Important Verify that you are using the latest version of Azure AD. For more information on how to upgrade Azure AD to the latest version, see Azure AD Connect: Upgrade from a version to the latest . 18.5. 
Removing the admin user Red Hat Advanced Cluster Security for Kubernetes (RHACS) creates an administrator account, admin , during the installation process that can be used to log in with a user name and password. The password is dynamically generated unless specifically overridden and is unique to your RHACS instance. In production environments, it is highly recommended to create an authentication provider and remove the admin user. 18.5.1. Removing the admin user after installation After an authentication provider has been successfully created, it is strongly recommended to remove the admin user. Removing the admin user is dependent on the installation method of the RHACS portal. Procedure Perform one of the following procedures: For Operator installations, set central.adminPasswordGenerationDisabled to true in your Central custom resource. For Helm installations: In your Central Helm configuration, set central.adminPassword.generate to false . Follow the steps to change the configuration. See "Changing configuration options after deployment" for more information. For roxctl installations: When generating the manifest, set Disable password generation to false . Follow the steps to install Central by using roxctl to apply the changes. See "Install Central using the roxctl CLI" for more information. Additional resources Changing configuration options after deploying the central-services Helm chart (OpenShift Container Platform) Changing configuration options after deploying the central-services Helm chart (Kubernetes) Install Central using the roxctl CLI After applying the configuration changes, you cannot log in as an admin user. Note You can add the admin user again as a fallback by reverting the configuration changes. When enabling the admin user again, a new password is generated. 18.6. Configuring short-lived access Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides the ability to configure short-lived access to the user interface and API calls. You can configure this by exchanging OpenID Connect (OIDC) identity tokens for a RHACS-issued token. We recommend this especially for Continuous Integration (CI) usage, where short-lived access is preferable over long-lived API tokens. The following steps outline the high-level workflow on how to configure short-lived access to the user interface and API calls: Configuring RHACS to trust OIDC identity token issuers for exchanging short-lived RHACS-issued tokens. Exchanging an OIDC identity token for a short-lived RHACS-issued token by calling the API. 18.6.1. Configure short-lived access for an OIDC identity token issuer Start configuring short-lived access for an OpenID Connect (OIDC) identity token issuer. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll to the Authentication Tokens category, and then click Machine access configuration . Click Create configuration . Select the configuration type , choosing one of the following: Generic if you use an arbitrary OIDC identity token issuer. GitHub Actions if you plan to access RHACS from GitHub Actions. Enter the OIDC identity token issuer. Enter the token lifetime for tokens issued by the configuration. Note The format for the token lifetime is XhYmZs and cannot be set longer than 24 hours. Add rules to the configuration: The Key is the OIDC token's claim to use. The Value is the expected OIDC token claim value. The Role is the role to assign to the token if the OIDC token claim and value exist. 
Note Rules are similar to Authentication Provider rules to assign roles based on claim values. As a general rule, Red Hat recommends to use unique, immutable claims within Rules. The general recommendation is to use the sub claim within the OIDC identity token. For more information about OIDC token claims, see the list of standard OIDC claims . Click Save . 18.6.2. Exchanging an identity token Prerequisites You have a valid OpenID Connect (OIDC) token. You added a Machine access configuration for the RHACS instance you want to access. Procedure Prepare the POST request's JSON data: { "idToken": "<id_token>" } Send a POST request to the API /v1/auth/m2m/exchange . Wait for the API response: { "accessToken": "<access_token>" } Use the returned access token to access the RHACS instance. Note If you are using GitHub Actions , you can use the stackrox/central-login GitHub Action . 18.7. Understanding multi-tenancy Red Hat Advanced Cluster Security for Kubernetes provides ways to implement multi-tenancy within a Central instance. You can implement multi-tenancy by using role-based access control (RBAC) and access scopes within RHACS. 18.7.1. Understanding resource scoping RHACS includes resources which are used within RBAC. In addition to associating permissions for a resource, each resource is also scoped. In RHACS, resources are scoped as the following types: Global scope, where a resource is not assigned to any cluster or namespace Cluster scope, where a resource is assigned to particular clusters Namespace scope, where a resource is assigned to particular namespaces The scope of resources is important when creating custom access scopes. Custom access scopes are used to create multi-tenancy within RHACS. Only resources which are cluster or namespace scoped are applicable for scoping in access scopes. Globally scoped resources are not scoped by access scopes. Therefore, multi-tenancy within RHACS can only be achieved for resources that are scoped either by cluster or namespace. 18.7.2. Multi-tenancy per namespace configuration example A common example for multi-tenancy within RHACS is associating users with a specific namespace and only allowing them access to their specific namespace. The following example combines a custom permission set, access scope, and role. The user or group assigned with this role can only see CVE information, violations, and information about deployments in the particular namespace or cluster scoped to them. Procedure In the RHACS portal, select Platform Configuration Access Control . Select Permission Sets . Click Create permission set . Enter a Name and Description for the permission set. Select the following resources and access level and click Save : READ Alert READ Deployment READ DeploymentExtension READ Image READ K8sRole READ K8sRoleBinding READ K8sSubject READ NetworkGraph READ NetworkPolicy READ Secret READ ServiceAccount Select Access Scopes . Click Create access scope . Enter a Name and Description for the access scope. In the Allowed resources section, select the namespace you want to use for scoping and click Save . Select Roles . Click Create role . Enter a Name and Description for the role. Select the previously created Permission Set and Access scope for the role and click Save . Assign the role to your required user or group. See Assigning a role to a user or a group . Note The RHACS dashboard options for users with the sample role are minimal compared to options available to an administrator. Only relevant pages are visible for the user. 18.7.3. 
Limitations Achieving multi-tenancy within RHACS is not possible for resources with a global scope . The following resources have a global scope: Access Administration Detection Integration VulnerabilityManagementApprovals VulnerabilityManagementRequests WatchedImage WorkflowAdministration These resources are shared across all users within a RHACS Central instance and cannot be scoped. Additional resources Creating a custom permission set Create a custom access scope Create a custom role
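To illustrate the exchange described in "Exchanging an identity token" above, the POST request can be sent with curl. The endpoint and JSON payloads are the ones documented in this section; the <central_url> placeholder, the curl flags, and the final example call are assumptions about your environment rather than part of the official procedure:
curl -X POST -H "Content-Type: application/json" -d '{"idToken": "<id_token>"}' https://<central_url>/v1/auth/m2m/exchange
The response contains the short-lived token, for example {"accessToken": "<access_token>"}, which is then presented as a bearer token on subsequent API calls:
curl -H "Authorization: Bearer <access_token>" https://<central_url>/<api_endpoint>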
|
[
"roxctl -e <hostname>:<port_number> central userpki create -c <ca_certificate_file> -r <default_role_name> <provider_name>",
"oc -n stackrox set env deploy/central ROX_ENABLE_OPENSHIFT_AUTH=true",
"CENTRAL_ADDITIONAL_ROUTES=' spec: central: exposure: loadBalancer: enabled: false port: 443 nodePort: enabled: false route: enabled: true persistence: persistentVolumeClaim: claimName: stackrox-db customize: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"central\\\"}}\" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"second-central\\\"}}\" 4 '",
"oc patch centrals.platform.stackrox.io -n <namespace> \\ 1 <custom-resource> \\ 2 --patch \"USDCENTRAL_ADDITIONAL_ROUTES\" --type=merge",
"customize: central: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"central\\\"}}\" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"second-central\\\"}}\" 4",
"helm upgrade -n stackrox stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> 1",
"{ \"idToken\": \"<id_token>\" }",
"{ \"accessToken\": \"<access_token>\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/managing-user-access
|
9.4.2. autofs
|
9.4.2. autofs One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS-mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to a dozen systems at one time, overall system performance can suffer. An alternative to /etc/fstab is to use the kernel-based automount utility, which can mount and unmount NFS file systems automatically, saving resources. The autofs service is used to control the automount command through the /etc/auto.master primary configuration file. While automount can be specified on the command line, it is more convenient to specify the mount points, hostname, exported directory, and options in a set of files rather than typing them manually. The autofs configuration files are arranged in a parent-child relationship. The main configuration file ( /etc/auto.master ) lists mount points on the system that are linked to a particular map type , which takes the form of other configuration files, programs, NIS maps, and other less common mount methods. The auto.master file contains lines referring to each of these mount points, organized in the following manner: The <mount-point> element specifies the location of the mount on the local file system. The <map-type> specifies how the mount point is mounted. The most common method for automounting NFS exports is to use a file as the map type for the particular mount point. The map file is usually named auto. <mount-point> , where <mount-point> is the mount point designated in auto.master . A line within map files to mount an NFS export looks like the following example: Replace </local/directory> with the local file system on which the exported directory is mounted. This mount point must exist before the map file is read; otherwise, the mount fails. Replace <options> with a comma-separated list of options for the NFS file system (refer to Section 9.4.3, "Common NFS Mount Options" for details). Be sure to include the hyphen character ( - ) immediately before the options list. Replace <server> with the hostname, IP address, or fully qualified domain name of the server exporting the file system. Replace </remote/export> with the path to the exported directory. While autofs configuration files can be used for a variety of mounts to many types of devices and file systems, they are particularly useful in creating NFS mounts. For example, some organizations store a user's /home/ directory on a central server via an NFS share, then configure the auto.master file on each of the workstations to point to an auto.home file containing the specifics for how to mount the /home/ directory via NFS. This allows the user to access personal data and configuration files in their /home/ directory by logging in anywhere on the network. The auto.master file in this situation would look similar to this: This sets up the /home/ mount point on the local system to be configured by the /etc/auto.home file, which looks similar to the example below: This line states that any directory a user tries to access under the local /home/ directory (due to the asterisk character) should result in an NFS mount on the server.example.com system on the mount point /home/ .
The mount options specify that each /home/ directory NFS mount should use a particular collection of settings. For more information on mount options, including the ones used in this example, refer to Section 9.4.3, "Common NFS Mount Options" . For more information about the autofs configuration files, refer to the auto.master man page.
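After editing /etc/auto.master or a map file such as /etc/auto.home, the automounter must re-read its configuration before the changes take effect. On this generation of Red Hat Enterprise Linux this is normally done through the init script; the following commands are the usual ones and are given here as an assumption about the local setup rather than as part of the original procedure:
service autofs reload    (re-read the maps without disturbing active mounts)
service autofs restart   (stop and start the automounter completely)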
|
[
"<mount-point> <map-type>",
"</local/directory> - <options> <server> : </remote/export>",
"/home /etc/auto.home",
"* -fstype=nfs4,soft,intr,rsize=32768,wsize=32768,nosuid server.example.com:/home"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-nfs-client-config-autofs
|
Part I. Preface
|
Part I. Preface
| null |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/part-preface
|
23.2.4. DHCP Relay Agent
|
23.2.4. DHCP Relay Agent The DHCP Relay Agent ( dhcrelay ) allows for the relay of DHCP and BOOTP requests from a subnet with no DHCP server on it to one or more DHCP servers on other subnets. When a DHCP client requests information, the DHCP Relay Agent forwards the request to the list of DHCP servers specified when the DHCP Relay Agent is started. When a DHCP server returns a reply, the reply is broadcast or unicast on the network that sent the original request. The DHCP Relay Agent listens for DHCP requests on all interfaces unless the interfaces are specified in /etc/sysconfig/dhcrelay with the INTERFACES directive. To start the DHCP Relay Agent, use the command service dhcrelay start .
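A minimal configuration might therefore look like the following. The INTERFACES directive is the one described above; the DHCPSERVERS directive name and the addresses are illustrative assumptions, since the list of DHCP servers can also be given as arguments when dhcrelay is started:
INTERFACES="eth1"              (listen for DHCP and BOOTP requests on eth1 only)
DHCPSERVERS="192.168.10.1"     (assumed directive naming the DHCP server to relay requests to)
After editing the file, start the relay agent with service dhcrelay start .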
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Configuring_a_DHCP_Server-DHCP_Relay_Agent
|
5.4.11. Renaming Logical Volumes
|
5.4.11. Renaming Logical Volumes To rename an existing logical volume, use the lvrename command. Either of the following commands renames logical volume lvold in volume group vg02 to lvnew . Renaming the root logical volume requires additional reconfiguration. For information on renaming a root volume, see How to rename root volume group or logical volume in Red Hat Enterprise Linux . For more information on activating logical volumes on individual nodes in a cluster, see Section 5.7, "Activating Logical Volumes on Individual Nodes in a Cluster" .
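For reference, the two equivalent command forms are the ones listed with this section:
lvrename /dev/vg02/lvold /dev/vg02/lvnew
lvrename vg02 lvold lvnew
The first form uses the full device paths, while the second names the volume group once, followed by the old and new logical volume names.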
|
[
"lvrename /dev/vg02/lvold /dev/vg02/lvnew",
"lvrename vg02 lvold lvnew"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/LV_rename
|
Chapter 1. Developing and compiling your Red Hat build of Quarkus applications with Apache Maven
|
Chapter 1. Developing and compiling your Red Hat build of Quarkus applications with Apache Maven As an application developer, you can use Red Hat build of Quarkus to create microservices-based applications written in Java that run on OpenShift Container Platform and serverless environments. Applications compiled to native executables have small memory footprints and fast startup times. Use the Quarkus Apache Maven plugin to create a Red Hat build of Quarkus project. Note Where applicable, alternative instructions for using the Quarkus command-line interface (CLI) are provided. The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Prerequisites You have installed OpenJDK 11 or 17. To download Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have set the JAVA_HOME environment variable to specify the location of the Java SDK. You have installed Apache Maven 3.8.6 or later. To download Maven, go to the Apache Maven Project website. 1.1. About Red Hat build of Quarkus Red Hat build of Quarkus is a Kubernetes-native Java stack optimized for containers and Red Hat OpenShift Container Platform. Quarkus is designed to work with popular Java standards, frameworks, and libraries such as Eclipse MicroProfile, Eclipse Vert.x, Apache Camel, Apache Kafka, Hibernate ORM with Jakarta Persistence, and RESTEasy Reactive (Jakarta REST). As a developer, you can choose the Java frameworks you want for your Java applications, which you can run in Java Virtual Machine (JVM) mode or compile and run in native mode. Quarkus provides a container-first approach to building Java applications. The container-first approach facilitates the containerization and efficient execution of microservices and functions. For this reason, Quarkus applications have a smaller memory footprint and faster startup times. Quarkus also optimizes the application development process with capabilities such as unified configuration, automatic provisioning of unconfigured services, live coding, and continuous testing that gives you instant feedback on your code changes. For information about the differences between the Quarkus community version and Red Hat build of Quarkus, see Differences between the Red Hat build of Quarkus community version and Red Hat build of Quarkus . 1.2. About Apache Maven and Red Hat build of Quarkus Apache Maven is a distributed build automation tool that is used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output by using an XML file, ensuring that the project gets built correctly and uniformly. Maven repositories A Maven repository stores Java libraries, plugins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the Red Hat-hosted Maven repository with your Quarkus projects, or you can download the Red Hat build of Quarkus Maven repository. Maven plugins Maven plugins are defined parts of a POM file that run one or more tasks. 
Red Hat build of Quarkus applications use the following Maven plugins: Quarkus Maven plugin ( quarkus-maven-plugin ) : Enables Maven to create Quarkus projects, packages your applications into JAR files, and provides a dev mode. Maven Surefire plugin ( maven-surefire-plugin ) : When Quarkus enables the test profile, the Maven Surefire plugin is used during the test phase of the build lifecycle to run unit tests on your application. The plugin generates text and XML files that contain the test reports. Additional resources Configuring your Red Hat build of Quarkus applications 1.2.1. Configuring the Maven settings.xml file for the online repository To use the Red Hat-hosted Quarkus repository with your Quarkus Maven project, configure the settings.xml file for your user. Maven settings that are used with a repository manager or a repository on a shared server offer better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. If you want to apply the configuration to a specific project only, use the -s option and specify the path to the project-specific settings.xml file. Procedure Open the Maven USDHOME/.m2/settings.xml file in a text editor or an integrated development environment (IDE). Note If no settings.xml file is present in the USDHOME/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/conf/ directory into the USDHOME/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Red Hat build of Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> 1.3. Creating a Red Hat build of Quarkus project on the command line Use the Red Hat build of Quarkus Maven plugin on the command line to create a Quarkus project by providing attributes and values on the command line or by using the plugin in interactive mode. You can also create a Quarkus project by using the Quarkus command-line interface (CLI). The resulting project includes the following elements: The Maven structure An associated unit test A landing page that is accessible on http://localhost:8080 after you start the application Example Dockerfile files for JVM and native mode in src/main/docker The application configuration file Prerequisites You have installed OpenJDK 11 or 17. To download Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have set the JAVA_HOME environment variable to specify the location of the Java SDK. You have installed Apache Maven 3.8.6 or later. To download Maven, go to the Apache Maven Project website. You have installed the Quarkus command-line interface (CLI), which is one of the methods you can use to create a Quarkus project. 
For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure In a command terminal, enter the following command to verify that Maven is using OpenJDK 11 or 17 and that the Maven version is 3.8.6 or later: mvn --version If the preceding command does not return OpenJDK 11 or 17, add the path to OpenJDK 11 or 17 to the PATH environment variable and enter the preceding command again. To use the Quarkus Maven plugin to create a project, use one of the following methods: Enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create \ -DprojectGroupId=<project_group_id> \ -DprojectArtifactId=<project_artifact_id> \ -DplatformGroupId=com.redhat.quarkus.platform \ -DplatformArtifactId=quarkus-bom \ -DplatformVersion=3.2.12.SP1-redhat-00003 -DpackageName=getting.started In this command, replace the following values: <project_group_id> : A unique identifier of your project <project_artifact_id> : The name of your project and your project directory Create the project in interactive mode: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create When prompted, enter the required attribute values. Note You can also create your project by using the default values for the project attributes by entering the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -B Create the project by using the Red Hat build of Quarkus CLI: quarkus create app my-groupId:my-artifactId --package-name=getting.started You can also get the list of available options with: quarkus create app --help The following table lists the attributes that you can define with the create command: Attribute Default Value Description projectGroupId org.acme A unique identifier of your project. projectArtifactId code-with-quarkus The name of your project and your project directory. If you do not specify the projectArtifactId attribute, the Maven plugin starts the interactive mode. If the directory already exists, the generation fails. projectVersion 1.0-SNAPSHOT The version of your project. platformGroupId com.redhat.quarkus.platform The group ID of your platform. All the existing platforms are provided by com.redhat.quarkus.platform . However, you can change the default value. platformArtifactId quarkus-bom The artifact ID of your platform BOM. platformVersion The latest platform version, for example, 3.2.12.SP1-redhat-00003 . The version of the platform you want to use for your project. When you provide a version range, the Maven plugin uses the latest version. packageName [] The name of the getting started package, getting.started . extensions [] The list of extensions you want to add to your project, separated by a comma. Note By default, the Quarkus Maven plugin uses the latest quarkus-bom file. The quarkus-bom file aggregates extensions so that you can reference them from your applications to align the dependency versions. When you are offline, the Quarkus Maven plugin uses the latest locally available version of the quarkus-bom file. If Maven finds the quarkus-bom version 2.0 or earlier, it uses the platform based on the quarkus-bom . 1.4. Creating a Red Hat build of Quarkus project by configuring the pom.xml file You can create a Quarkus project by configuring the Maven pom.xml file. Procedure Open the pom.xml file in a text editor. 
Add the configuration properties that contain the following items: The Maven Compiler Plugin version The Quarkus BOM groupID , artifactID , and version The Maven Surefire Plugin version The skipITs property. <properties> <compiler-plugin.version>3.11.0</compiler-plugin.version> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.version>3.2.11.Final-redhat-00001</quarkus.platform.version> <surefire-plugin.version>3.1.2</surefire-plugin.version> <skipITs>true</skipITs> </properties> Add the Quarkus GAV (group, artifact, version) and use the quarkus-bom file to omit the versions of the different Quarkus dependencies: <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Add the Quarkus Maven plugin, the Maven Compiler plugin, and the Maven Surefire plugin: <build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <compilerArgs> <arg>-parameters</arg> </compilerArgs> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> Note The maven-surefire-plugin runs the unit tests for your application. Optional: To build a native application, add a specific native profile that includes the Maven Failsafe Plugin : <build> <plugins> ... <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner </native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> </plugins> </build> ... <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> Tests that include IT in their names and contain the @NativeImageTest annotation are run against the native executable. For more details about how native mode differs from JVM mode, see Difference between JVM and native mode in the Quarkus "Getting Started" guide. 1.5. 
Creating the Getting Started project by using code.quarkus.redhat.com As an application developer, you can use code.quarkus.redhat.com to generate a Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. In addition, code.quarkus.redhat.com automatically manages the configuration parameters that are required to compile your project into a native executable. You can generate a Quarkus Maven project, including the following activities: Specifying basic details about your application Choosing the extensions that you want to include in your project Generating a downloadable archive with your project files Using custom commands for compiling and starting your application Prerequisites You have a web browser. You have prepared your environment to use Apache Maven. For more information, see Preparing your environment . You have configured your Quarkus Maven repository. To create a Quarkus application with Maven, use the Red Hat-hosted Quarkus repository. For more information, see Configuring the Maven settings.xml file for the online repository . Optional : You have installed the Quarkus command-line interface (CLI), which is one of the methods you can use to start Quarkus in dev mode. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure On your web browser, navigate to https://code.quarkus.redhat.com . Specify basic details about your project: Enter a group name for your project. The name format follows the Java package naming convention; for example, org.acme . Enter a name for the Maven artifacts generated by your project, such as code-with-quarkus . Select the build tool you want to use to compile and start your application. The build tool that you choose determines the following setups: The directory structure of your generated project The format of configuration files that are used in your generated project The custom build script and command for compiling and starting your application that code.quarkus.redhat.com displays for you after you generate your project Note Red Hat provides support for using code.quarkus.redhat.com to create Quarkus Maven projects only. Specify additional details about your application project: To display the fields that contain further application details, select More options . Enter a version you want to use for artifacts generated by your project. The default value of this field is 1.0.0-SNAPSHOT . Using semantic versioning is recommended; however, you can choose to specify a different type of versioning. Select whether you want code.quarkus.redhat.com to add starter code to your project. When you add extensions that are marked with " STARTER-CODE " to your project, you can enable this option to automatically create example class and resource files for those extensions when you generate your project. However, this option does not affect your generated project if you do not add any extensions that provide an example code. Note The code.quarkus.redhat.com application automatically uses the latest release of Red Hat build of Quarkus. However, should you require, it is possible to manually change to an earlier BOM version in the pom.xml file after you generate your project, but this is not recommended. Select the extensions that you want to use. The extensions you select are included as dependencies of your Quarkus application. 
The Quarkus platform also ensures these extensions are compatible with future versions. Important Do not use the RESTEasy and the RESTEasy Reactive extensions in the same project. The quark icon ( ) to an extension indicates that the extension is part of the Red Hat build of Quarkus platform release. Red Hat recommends using extensions from the same platform because they are tested and verified together and are therefore easier to use and upgrade. You can enable the option to automatically generate starter code for extensions marked with " STARTER-CODE ". To confirm your choices, select Generate your application . The following items are displayed: A link to download the archive that contains your generated project A custom command that you can use to compile and start your application To save the archive with the generated project files to your machine, select Download the ZIP . Extract the contents of the archive. Go to the directory that contains your extracted project files: cd <directory_name> To compile and start your application in dev mode, use one of the following ways: Using Maven: Using the Quarkus CLI: Additional resources Support levels for Red Hat build of Quarkus extensions 1.6. Configuring the Java compiler By default, the Quarkus Maven plugin passes compiler flags to javac command from maven-compiler-plugin . Procedure To customize the compiler flags used in development mode, add a configuration section to the plugin block and set the compilerArgs property. You can also set source , target , and jvmArgs . For example, to pass -verbose to the JVM and javac commands, add the following configuration: <plugin> <groupId>com.redhat.quarkus.platform</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <configuration> <source>USD{maven.compiler.source}</source> <target>USD{maven.compiler.target}</target> <compilerArgs> <arg>-verbose</arg> </compilerArgs> <jvmArgs>-verbose</jvmArgs> </configuration> ... </plugin> 1.7. Installing and managing extensions In Red Hat build of Quarkus, you can use extensions to expand your application's functionality and configure, boot, and integrate a framework into your application. This procedure shows you how to find and add extensions to your Quarkus project. Prerequisites You have created a Quarkus Maven project. You have installed the Quarkus command-line interface (CLI), which is one of the methods you can use to manage your Quarkus extensions. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure Navigate to your Quarkus project directory. 
List all of the available extensions by using one of the following ways: Using Maven: ./mvnw quarkus:list-extensions Using the Quarkus CLI: quarkus extension --installable Add an extension to your project by using one of the following ways: Using Maven, enter the following command where <extension> is the group, artifact, and version (GAV) of the extension that you want to add: ./mvnw quarkus:add-extension -Dextensions="<extension>" For example, to add the Agroal extension, enter the following command: ./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-agroal" Using the Quarkus CLI, enter the following command where <extension> is the group, artifact, and version (GAV) of the extension that you want to add: quarkus extension add '<extension>' To search for a specific extension, enter the extension name or partial name after -Dextensions= . The following example searches for extensions that contain the text agroal in the name: ./mvnw quarkus:add-extension -Dextensions=agroal This command returns the following result: Similarly, with the Quarkus CLI, you might enter: quarkus extension add 'agroal' 1.8. Importing your project into an IDE Although you can develop your Red Hat build of Quarkus project in a text editor, you might find using an integrated development environment (IDE) easier. The following instructions show you how to import your project into specific IDEs. Prerequisites You have a Quarkus Maven project. You have installed the Quarkus command-line interface (CLI), which is required to start your project in dev mode. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure Complete the required procedure for your IDE. CodeReady Studio or Eclipse In CodeReady Studio or Eclipse, click File >*Import*. Select Maven Existing Maven Project . , select the root location of the project. A list of the available modules appears. Select the generated project, and click Finish . To compile and start your application, use one of the following ways: Using Maven: Using the Quarkus CLI: IntelliJ In IntelliJ, complete one of the following tasks: Select File > New > Project From Existing Sources . On the Welcome page, select Import project . Select the project root directory. Select Import project from external model , and then select Maven . Review the options, and then click . Click Create . To compile and start your application, use one of the following ways: Using Maven: Using the Quarkus CLI: Apache NetBeans Select File > Open Project . Select the project root directory. Click Open Project . To compile and start your application, use one of the following ways: Using Maven: Using the Quarkus CLI: Visual Studio Code Install the Java Extension Pack. In Visual Studio Code, open your project directory. Verification The project loads as a Maven project. 1.9. Configuring the Red Hat build of Quarkus project output Before you build your application, you can control the build command output by changing the default values of the properties in the application.properties file. Prerequisites You have created a Quarkus Maven project. Procedure Go to the {project}/src/main/resources folder, and open the application.properties file in a text editor. Edit the values of properties that you want to change and save the file. 
The following table lists the properties that you can change: Property Description Type Default quarkus.package.main-class The entry point of the application. In most cases, you must change this value. string io.quarkus.runner.GeneratedMain quarkus.package.type The requested output type for the package, which you can set to 'jar' (uses 'fast-jar'), 'legacy-jar' for the pre-1.12 default jar packaging, 'uber-jar', 'native', or 'native-sources'. string jar quarkus.package.manifest.add-implementation-entries Determines whether the implementation information must be included in the runner JAR file's MANIFEST.MF file. boolean true quarkus.package.user-configured-ignored-entries Files that must not be copied to the output artifact. string (list) quarkus.package.runner-suffix The suffix that is applied to the runner JAR file. string -runner quarkus.package.output-directory The output folder for the application build. This is resolved relative to the build system target directory. string quarkus.package.output-name The name of the final artifact. string 1.10. Testing your Red Hat build of Quarkus application in JVM mode with a custom profile Similar to any other running mode, configuration values for testing are read from the src/main/resources/application.properties file. By default, the test profile is active during testing in JVM mode, meaning that properties prefixed with %test take precedence. For example, when you run a test with the following configuration, the value returned for the property message is Test Value . If the %test profile is inactive (dev, prod), the value returned for the property message is Hello . For example, your application might require multiple test profiles to run a set of tests against different database instances. To do this, you must override the testing profile name, which can be done by setting the system property quarkus.test.profile when executing Maven. By doing so, you can control which sets of configuration values are active during the test. To learn more about standard testing with the 'Starting With Quarkus' example, see Testing your Red Hat build of Quarkus application with JUnit in the Getting Started guide. Prerequisites A Quarkus project created with Apache Maven. Procedure When running tests on a Quarkus application, the test configuration profile is set as active by default. However, you can change the profile to a custom profile by using the quarkus.test.profile system property. Run the following command to test your application: Note You cannot use a custom test configuration profile in native mode. Native tests always run under the prod profile. 1.11. Logging the Red Hat build of Quarkus application build classpath tree The Quarkus build process adds deployment dependencies of the extensions that you use in the application to the original application classpath. You can see which dependencies and versions are included in the build classpath. The quarkus-bootstrap Maven plugin includes the build-tree goal, which displays the build dependency tree for the application. Prerequisites You have created a Quarkus Maven application. Procedure To list the build dependency tree of your application, enter the following command: Example output. The exact output you see will differ from this example. 
[INFO] └─ io.quarkus:quarkus-resteasy-deployment:jar:3.2.11.Final-redhat-00001 (compile) [INFO] ├─ io.quarkus:quarkus-resteasy-server-common-deployment:jar:3.2.11.Final-redhat-00001 (compile) [INFO] │ ├─ io.quarkus:quarkus-resteasy-common-deployment:jar:3.2.11.Final-redhat-00001 (compile) [INFO] │ │ ├─ io.quarkus:quarkus-resteasy-common:jar:3.2.11.Final-redhat-00001 (compile) [INFO] │ │ │ ├─ org.jboss.resteasy:resteasy-core:jar:6.2.4.Final-redhat-00003 (compile) [INFO] │ │ │ │ ├─ jakarta.xml.bind:jakarta.xml.bind-api:jar:4.0.0.redhat-00008 (compile) [INFO] │ │ │ │ ├─ org.jboss.resteasy:resteasy-core-spi:jar:6.2.4.Final-redhat-00003 (compile) [INFO] │ │ │ │ ├─ org.reactivestreams:reactive-streams:jar:1.0.4.redhat-00003 (compile) [INFO] │ │ │ │ └─ com.ibm.async:asyncutil:jar:0.1.0.redhat-00010 (compile) ... Note The mvn dependency:tree command displays only the runtime dependencies of your application 1.12. Producing a native executable A native binary is an executable that is created to run on a specific operating system and CPU architecture. The following list outlines some examples of a native executable: An ELF binary for Linux AMD 64 bits An EXE binary for Windows AMD 64 bits An ELF binary for ARM 64 bits When you build a native executable, one advantage is that your application and dependencies, including the JVM, are packaged into a single file. The native executable for your application contains the following items: The compiled application code. The required Java libraries. A reduced version of the Java virtual machine (JVM) for improved application startup times and minimal disk and memory footprint, which is also tailored for the application code and its dependencies. To produce a native executable from your Quarkus application, you can select either an in-container build or a local-host build. The following table explains the different building options that you can use: Table 1.1. Building options for producing a native executable Building option Requires Uses Results in Benefits In-container build - Supported A container runtime, for example, Podman or Docker The default registry.access.redhat.com/quarkus/mandrel-23-rhel8:23.0 builder image A Linux 64-bit executable using the CPU architecture of the host GraalVM does not need to be set up locally, which makes your CI pipelines run more efficiently Local-host build - Only supported upstream A local installation of GraalVM or Mandrel Its local installation as a default for the quarkus.native.builder-image property An executable that has the same operating system and CPU architecture as the machine on which the build is executed An alternative for developers that are not allowed or do not want to use tools such as Docker or Podman. Overall, it is faster than the in-container build approach. Important Red Hat build of Quarkus 3.2 only supports the building of native Linux executables by using a Java 17-based Red Hat build of Quarkus Native builder image, which is a productized distribution of Mandrel . While other images are available in the community, they are not supported in the product, so you should not use them for production builds that you want Red Hat to provide support for. Applications whose source is written based on Java 11, with no Java 12 - 17 features used, can still compile a native executable of that application using the Java 17-based Mandrel 23.0 base image. 
Building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other distributions of GraalVM is not supported for Red Hat build of Quarkus. 1.12.1. Producing a native executable by using an in-container build To create a native executable and run the native image tests, use the native profile that is provided by Red Hat build of Quarkus for an in-container build. Prerequisites Podman or Docker is installed. The container has access to at least 8GB of memory. Procedure Open the Getting Started project pom.xml file, and verify that the project includes the native profile: <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> Build a native executable by using one of the following ways: Using Maven: For Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true For Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Using the Quarkus CLI: For Docker: quarkus build --native -Dquarkus.native.container-build=true For Podman: quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary produced by Quarkus. The target directory is a directory that Maven creates when you build a Maven application. Important Compiling a Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. To run the native executable, enter the following command: ./target/*-runner Additional resources Native executable configuration properties 1.12.2. Producing a native executable by using a local-host build If you are not using Docker or Podman, use the Quarkus local-host build option to create and run a native executable. Using the local-host build approach is faster than using containers and is suitable for machines that use a Linux operating system. Important Using the following procedure in production is not supported by Red Hat build of Quarkus. Use this method only when testing or as a backup approach when Docker or Podman is not available. Prerequisites A local installation of Mandrel or GraalVm, correctly configured according to the Building a native executable guide. Additionally, for a GraalVM installation, native-image must also be installed. Procedure For GraalVM or Mandrel, build a native executable by using one of the following ways: Using Maven: ./mvnw package -Dnative Using the Quarkus CLI: quarkus build --native Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary produced by Quarkus. The target directory is a directory that Maven creates when you build a Maven application. Note When you build the native executable, the prod profile is enabled unless modified in the quarkus.profile property. 
Run the native executable: ./target/*-runner Additional resources For more information, see the Producing a native executable section of the "Building a native executable" guide in the Quarkus community. 1.12.3. Creating a container manually This section shows you how to manually create a container image with your application for Linux AMD64. When you produce a native image by using the Quarkus Native container, the native image creates an executable that targets Linux AMD64. If your host operating system is different from Linux AMD64, you cannot run the binary directly and you need to create a container manually. Your Quarkus Getting Started project includes a Dockerfile.native in the src/main/docker directory with the following content: FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8 WORKDIR /work/ RUN chown 1001 /work \ && chmod "g+rwX" /work \ && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT ["./application", "-Dquarkus.http.host=0.0.0.0"] Note Universal Base Image (UBI) The following list displays the suitable images for use with Dockerfiles. Red Hat Universal Base Image 8 (UBI8). This base image is designed and engineered to be the base layer for all of your containerized applications, middleware, and utilities. Red Hat Universal Base Image 8 Minimal (UBI8-minimal). A stripped-down UBI8 image that uses microdnf as a package manager. All Red Hat Base images are available on the Container images catalog site. Procedure Build a native Linux executable by using one of the following methods: Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Build the container image by using one of the following methods: Docker: docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Podman podman build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Run the container by using one of the following methods: Docker: docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started Podman: podman run -i --rm -p 8080:8080 quarkus-quickstart/getting-started 1.13. Testing the native executable Test the application in native mode to test the functionality of the native executable. Use the @QuarkusIntegrationTest annotation to build the native executable and run tests against the HTTP endpoints. Important The following example shows how to test a native executable with a local installation of GraalVM or Mandrel. Before you begin, consider the following points: This scenario is not supported by Red Hat build of Quarkus, as outlined in Producing a native executable . The native executable you are testing with here must match the operating system and architecture of the host. Therefore, this procedure will not work on a macOS or an in-container build. 
Procedure Open the pom.xml file and verify that the build section has the following elements: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> The Maven Failsafe plugin ( maven-failsafe-plugin ) runs the integration test and indicates the location of the native executable that is generated. Open the src/test/java/org/acme/GreetingResourceIT.java file and verify that it includes the following content: package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. } 1 Use another test runner that starts the application from the native file before the tests. The executable is retrieved by using the native.image.path system property configured in the Maven Failsafe plugin. 2 This example extends the GreetingResourceTest , but you can also create a new test. Run the test: ./mvnw verify -Dnative The following example shows the output of this command: ./mvnw verify -Dnative .... GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable)... ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.0.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [[7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total .... 
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ 2023-08-28 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Red Hat build of Quarkus 3.2.9.Final) started in 0.038s. Listening on: http://0.0.0.0:8081 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started --- Note Quarkus waits 60 seconds for the native image to start before automatically failing the native tests. You can change this duration by configuring the quarkus.test.wait-time system property. You can extend the wait time by using the following command where <duration> is the wait time in seconds: Note Native tests run using the prod profile by default unless modified in the quarkus.test.native-image-profile property. 1.14. Using Red Hat build of Quarkus development mode Development mode enables hot deployment with background compilation, which means that when you modify your Java or resource files and then refresh your browser, the changes automatically take effect. This also works for resource files such as the configuration property file. You can use either Maven or the Quarkus command-line interface (CLI) to start Quarkus in development mode. Prerequisites You have created a Quarkus Maven application. You have installed the Quarkus CLI, which is one of the methods you can use to start Quarkus in development mode. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure Switch to the directory that contains your Quarkus application pom.xml file. To compile and start your Quarkus application in development mode, use one of the following methods: Using Maven: Using the Quarkus CLI: Make changes to your application and save the files. Refresh the browser to trigger a scan of the workspace. If any changes are detected, the Java files are recompiled and the application is redeployed. Your request is then serviced by the redeployed application. If there are any issues with compilation or deployment, an error page appears. In development mode, the debugger is activated and listens on port 5005 . Optional: To wait for the debugger to attach before running the application, include -Dsuspend : Optional: To prevent the debugger from running, include -Ddebug=false : 1.15. Debugging your Red Hat build of Quarkus project When Red Hat build of Quarkus starts in development mode, debugging is enabled by default, and the debugger listens on port 5005 without suspending the JVM. 
You can enable and configure the debugging feature of Quarkus from the command line or by configuring the system properties. You can also use the Quarkus CLI to debug your project. Prerequisites You have created a Red Hat build of Quarkus Maven project. You have installed the Quarkus command-line interface (CLI), which is one of the methods you can use to compile and debug your project. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure Use one of the following methods to control debugging: Controlling the debugger by configuring system properties Change one of the following values of the debug system property where PORT is the port that the debugger is listening on: false : The JVM starts with debug mode disabled. true : The JVM starts in debug mode and is listening on port 5005 . client : The JVM starts in client mode and tries to connect to localhost:5005 . PORT : The JVM starts in debug mode and is listening on PORT . To suspend the JVM while running in debug mode, set the value of the suspend system property to one of the following values: y or true : The debug mode JVM launch suspends. n or false : The debug mode JVM starts without suspending. Controlling the debugger from the command line To compile and start your Quarkus application in debug mode with a suspended JVM, use one of the following ways Using Maven: Using the Quarkus CLI: Enabling the debugger for specific host network interfaces In development mode, by default, for security reasons, Quarkus sets the debug host interface to localhost . To enable the debugger for a specific host network interface, you can use the -DdebugHost option by using one of the following ways: Using Maven: Using the Quarkus CLI: Where <host-ip-address> is the IP address of the host network interface that you want to enable debugging on. Note To enable debugging on all host interfaces, replace <host-ip-address> with the following value: 1.16. Additional resources Getting Started with Quarkus Apache Maven project Revised on 2024-10-10 17:19:21 UTC
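As a compact recap of the workflow covered in this chapter, a typical end-to-end session might look like the following. The group ID org.acme and artifact ID getting-started are illustrative values, and every command is taken from the sections above:
mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started -DplatformGroupId=com.redhat.quarkus.platform -DplatformArtifactId=quarkus-bom -DplatformVersion=3.2.12.SP1-redhat-00003 -DpackageName=getting.started
cd getting-started
./mvnw quarkus:dev                                               (start dev mode with live reload on http://localhost:8080)
./mvnw package -Dnative -Dquarkus.native.container-build=true    (produce a native Linux executable with an in-container build)
./target/*-runner                                                (run the resulting native binary)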
|
[
"<!-- Configure the Red Hat build of Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"mvn --version",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -DprojectGroupId=<project_group_id> -DprojectArtifactId=<project_artifact_id> -DplatformGroupId=com.redhat.quarkus.platform -DplatformArtifactId=quarkus-bom -DplatformVersion=3.2.12.SP1-redhat-00003 -DpackageName=getting.started",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create",
"quarkus create app my-groupId:my-artifactId --package-name=getting.started",
"quarkus create app --help",
"<properties> <compiler-plugin.version>3.11.0</compiler-plugin.version> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.version>3.2.11.Final-redhat-00001</quarkus.platform.version> <surefire-plugin.version>3.1.2</surefire-plugin.version> <skipITs>true</skipITs> </properties>",
"<dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <compilerArgs> <arg>-parameters</arg> </compilerArgs> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build>",
"<build> <plugins> <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner </native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> </plugins> </build> <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>",
"cd <directory_name>",
"./mvnw quarkus:dev",
"quarkus dev",
"<plugin> <groupId>com.redhat.quarkus.platform</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <configuration> <source>USD{maven.compiler.source}</source> <target>USD{maven.compiler.target}</target> <compilerArgs> <arg>-verbose</arg> </compilerArgs> <jvmArgs>-verbose</jvmArgs> </configuration> </plugin>",
"./mvnw quarkus:list-extensions",
"quarkus extension --installable",
"./mvnw quarkus:add-extension -Dextensions=\"<extension>\"",
"./mvnw quarkus:add-extension -Dextensions=\"io.quarkus:quarkus-agroal\"",
"quarkus extension add '<extension>'",
"./mvnw quarkus:add-extension -Dextensions=agroal",
"[SUCCESS] ✅ Extension io.quarkus:quarkus-agroal has been installed",
"quarkus extension add 'agroal'",
"./mvnw quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev",
"quarkus dev",
"message=Hello %test.message=Test Value",
"mvn test -Dquarkus.test.profile=__<profile-name>__",
"./mvnw quarkus:dependency-tree",
"[INFO] └─ io.quarkus:quarkus-resteasy-deployment:jar:3.2.11.Final-redhat-00001 (compile) [INFO] ├─ io.quarkus:quarkus-resteasy-server-common-deployment:jar:3.2.11.Final-redhat-00001 (compile) [INFO] │ ├─ io.quarkus:quarkus-resteasy-common-deployment:jar:3.2.11.Final-redhat-00001 (compile) [INFO] │ │ ├─ io.quarkus:quarkus-resteasy-common:jar:3.2.11.Final-redhat-00001 (compile) [INFO] │ │ │ ├─ org.jboss.resteasy:resteasy-core:jar:6.2.4.Final-redhat-00003 (compile) [INFO] │ │ │ │ ├─ jakarta.xml.bind:jakarta.xml.bind-api:jar:4.0.0.redhat-00008 (compile) [INFO] │ │ │ │ ├─ org.jboss.resteasy:resteasy-core-spi:jar:6.2.4.Final-redhat-00003 (compile) [INFO] │ │ │ │ ├─ org.reactivestreams:reactive-streams:jar:1.0.4.redhat-00003 (compile) [INFO] │ │ │ │ └─ com.ibm.async:asyncutil:jar:0.1.0.redhat-00010 (compile)",
"<profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>",
"./mvnw package -Dnative -Dquarkus.native.container-build=true",
"./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"quarkus build --native -Dquarkus.native.container-build=true",
"quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"./target/*-runner",
"./mvnw package -Dnative",
"quarkus build --native",
"./target/*-runner",
"FROM registry.access.redhat.com/ubi8/ubi-minimal:8.8 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT [\"./application\", \"-Dquarkus.http.host=0.0.0.0\"]",
"registry.access.redhat.com/ubi8/ubi:8.8",
"registry.access.redhat.com/ubi8/ubi-minimal:8.8",
"./mvnw package -Dnative -Dquarkus.native.container-build=true",
"./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman",
"docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .",
"build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .",
"docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started",
"run -i --rm -p 8080:8080 quarkus-quickstart/getting-started",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin>",
"package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. }",
"./mvnw verify -Dnative",
"./mvnw verify -Dnative . GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable) ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.0.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [[7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total . [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2023-08-28 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Red Hat build of Quarkus 3.2.9.Final) started in 0.038s. Listening on: http://0.0.0.0:8081 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2023-08-28 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started ---",
"./mvnw verify -Dnative -Dquarkus.test.wait-time= <duration>",
"./mvnw quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev -Dsuspend",
"./mvnw quarkus:dev -Ddebug=false",
"./mvnw quarkus:dev -Dsuspend",
"quarkus dev -Dsuspend",
"./mvnw quarkus:dev -DdebugHost=<host-ip-address>",
"quarkus dev -DdebugHost=<host-ip-address>",
"0.0.0.0"
] |
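Taken together, the commands listed above form one end-to-end workflow. The following is a condensed sketch, assuming org.acme and getting-started as the project coordinates (the placeholders in the create command) and the agroal extension used in the examples above:

# Scaffold the project with the Red Hat build of Quarkus Maven plug-in.
mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create \
    -DprojectGroupId=org.acme -DprojectArtifactId=getting-started \
    -DplatformGroupId=com.redhat.quarkus.platform -DplatformArtifactId=quarkus-bom \
    -DplatformVersion=3.2.12.SP1-redhat-00003 -DpackageName=getting.started
cd getting-started

# Iterate in dev mode (stop with Ctrl+C), then add an extension.
./mvnw quarkus:dev
./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-agroal"

# Build and run a native executable inside a container.
./mvnw package -Dnative -Dquarkus.native.container-build=true
./target/*-runner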
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/assembly_quarkus-maven_quarkus-maven
|
Part II. Administration
|
Part II. Administration This part covers topics related to virtual machine administration and explains how virtualization features, such as virtual networking, storage, and PCI assignment, work. This part also provides instructions on device and guest virtual machine management with the qemu-img , virt-manager , and virsh tools.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/part-administration
|
7.4. Assigning and Managing Unique Numeric Attribute Values
|
7.4. Assigning and Managing Unique Numeric Attribute Values Some entry attributes require having a unique number, such as uidNumber and gidNumber . The Directory Server can automatically generate and supply unique numbers for specified attributes using the Distributed Numeric Assignment (DNA) Plug-in. Note Attribute uniqueness is not necessarily preserved with the DNA Plug-in. The plug-in only assigns non-overlapping ranges, but it does allow manually-assigned numbers for its managed attributes, and it does not verify or require that the manually-assigned numbers are unique. The issue with assigning unique numbers is not with generating the numbers but in effectively avoiding replication conflicts. The DNA Plug-in assigns unique numbers across a single back end. For multi-supplier replication, when each supplier is running a local DNA Plug-in instance, there has to be a way to ensure that each instance is using a truly unique set of numbers. This is done by assigning different ranges of numbers to each server to assign. 7.4.1. About Dynamic Number Assignments The DNA Plug-in for a server assigns a range of available numbers that that instance can issue. The range definition is very simple and is set by two attributes: the server's available number (the low end of the range) and its maximum value (the top end of the range). The initial bottom range is set when the plug-in instance is configured. After that, the bottom value is updated by the plug-in. By breaking the available numbers into separate ranges on each replica, the servers can all continually assign numbers without overlapping with each other. 7.4.1.1. Filters, Searches, and Target Entries The server performs a sorted search, internally, to see if the specified range is already taken, requiring the managed attribute to have an equality index with the proper ordering matching rule (as described in Section 13.2, "Creating Standard Indexes" ). The DNA Plug-in is applied, always, to a specific area of the directory tree (the scope ) and to specific entry types within that subtree (the filter ). Important The DNA Plug-in only works on a single back end; it cannot manage number assignments for multiple databases. The DNA plug-in uses the sort control when checking whether a value has already been manually allocated outside of the DNA Plug-in. This validation, using the sort control, only works on a single back end. 7.4.1.2. Ranges and Assigning Numbers There are several different ways that the Directory Server can handle generating attribute values: In the simplest case, a user entry is added to the directory with an object class which requires the unique-number attribute, but without the attribute present. Adding an entry with no value for the managed attribute triggers the DNA Plug-in to assign a value. This option only works if the DNA Plug-in has been configured to assign unique values to a single attribute. A similar and more manageable option is to use a magic number . This magic number is a template value for the managed attribute, something outside the server's range, a number or even a word, that the plug-in recognizes it needs to replace with a new assigned value. When an entry is added with the magic value and the entry is within the scope and filter of the configured DNA Plug-in, then using the magic number automatically triggers the plug-in to generate a new value. The following example, based on using ldapmodify , adds 0 as a magic number: The DNA Plug-in only generates new, unique values. 
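For instance, a minimal sketch of such an add operation, assuming 0 is the configured magic value, ou=people,dc=example,dc=com is within the plug-in scope, and the sn and homeDirectory values are included only to satisfy the schema (the entry otherwise mirrors the ldapmodify example in the commands list below):

# The magic value 0 triggers the DNA Plug-in, which replaces uidNumber
# and gidNumber with generated values when the entry is added.
ldapmodify -D "cn=Directory Manager" -W -x <<EOF
dn: uid=jsmith,ou=people,dc=example,dc=com
changetype: add
objectClass: top
objectClass: person
objectClass: posixAccount
uid: jsmith
cn: John Smith
sn: Smith
homeDirectory: /home/jsmith
uidNumber: 0
gidNumber: 0
EOF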
If an entry is added or modified to use a specific value for an attribute controlled by the DNA Plug-in, the specified number is used; the DNA Plug-in will not overwrite it. 7.4.1.3. Multiple Attributes in the Same Range The DNA Plug-in can assign unique numbers to a single attribute type or across multiple attribute types from a single range of unique numbers. This provides several options for assigning unique numbers to attributes: A single number assigned to a single attribute type from a single range of unique numbers. The same unique number assigned to two attributes for a single entry. Two different attributes assigned two different numbers from the same range of unique numbers. In many cases, it is sufficient to have a unique number assigned per attribute type. When assigning an employeeID to a new employee entry, it is important that each employee entry is assigned a unique employeeID . However, there are cases where it may be useful to assign unique numbers from the same range of numbers to multiple attributes. For example, when assigning a uidNumber and a gidNumber to a posixAccount entry, the DNA Plug-in will assign the same number to both attributes. To do this, pass both managed attributes to the modify operation, specifying the magic value. Using ldapmodify : When multiple attributes are handled by the DNA Plug-in, the plug-in can assign a unique value to only one of those attributes if the object class only allows one of them. For example, the posixGroup object class does not allow a uidNumber attribute but it does allow gidNumber . If the DNA Plug-in manages both uidNumber and gidNumber , then when a posixGroup entry is created, a unique number for gidNumber is assigned from the same range as the uidNumber and gidNumber attributes. Using the same pool for all attributes managed by the plug-in keeps the assignment of unique numbers aligned and prevents situations where a uidNumber and a gidNumber on different entries are assigned from different ranges and result in the same unique number. If multiple attributes are handled by the DNA Plug-in, then the same value will be assigned to all of the given managed attributes in an entry in a single modify operation. To assign different numbers from the same range, you must perform separate modify operations. The following example uses ldapmodify to do so: Important When the DNA Plug-in is configured to assign unique numbers to multiple attributes, it is necessary to specify the magic value for each attribute that requires the unique number. While this is not necessary when the DNA plug-in has been configured to provide unique numbers for a single attribute, it is necessary for multiple attributes. There may be instances where an entry does not allow each type of attribute defined for the range, or, more importantly, an entry allows all of the attribute types defined, but only a subset of the attributes require the unique value. Example 7.6. DNA and Unique Bank Account Numbers Example Bank wants to use the same unique number for a customer's primaryAccount and customerID attributes. The Example Bank administrator configured the DNA Plug-in to assign unique values for both attributes from the same range. The bank also wants to assign numbers for secondary accounts from the same range as the customer ID and primary account numbers, but these numbers cannot be the same as the primary account numbers.
The Example Bank administrator configures the DNA Plug-in to also manage the secondaryAccount attribute, but will only add the secondaryAccount attribute to an entry after the entry is created and the primaryAccount and customerID attributes are assigned. This ensures that primaryAccount and customerID share the same unique number, and any secondaryAccount numbers are entirely unique but still from the same range of numbers. 7.4.2. Looking at the DNA Plug-in Syntax The DNA Plug-in itself is a container entry, similar to the Password Storage Schemes Plug-in. Each DNA entry underneath the DNA Plug-in entry defines a new managed range for the DNA Plug-in. To set new managed ranges for the DNA Plug-in, create entries beneath the container entry. The most basic configuration is to set up distributed numeric assignments on a single server, meaning the ranges will not be shared or transferred between servers. A basic DNA configuration entry defines four things: The attribute that value is being managed, set in the dnaType attribute The entry DN to use as the base to search for entries, set in the dnaScope attribute The search filter to use to identify entries to manage, set in the dnaFilter attribute The available value to assign, set in the dnaNextValue attribute (after the entry is created, this is handled by the plug-in) For a list of attributes supported in the cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config entry, see the Red Hat Directory Server Configuration, Command, and File Reference . To configure distributed numeric assignment on a single server for a single attribute type: If multiple suppliers are configured for distributed numeric assignments, then the entry must contain the required information to transfer ranges: The maximum number that the server can assign; this sets the upward bound for the range, which is logically required when multiple servers are assigning numbers. This is set in the dnaMaxValue attribute. The threshold where the range is low enough to trigger a range transfer, set in the dnaThreshold attribute. If this is not set, the default value is 1 . A timeout period so that the server does not hang waiting for a transfer, set in the dnaRangeRequestTimeout attribute. If this is not set, the default value is 10 , meaning 10 seconds. A configuration entry DN which is shared among all supplier servers, which stores the range information for each supplier, set in the dnaSharedCfgDN attribute. The specific number range which could be assigned by the server is defined in the dnaNextRange attribute. This shows the available range for transfer and is managed automatically by the plug-in, as ranges are assigned or used by the server. This range is just "on deck." It has not yet been assigned to another server and is still available for its local Directory Server to use. The dnaNextRange attribute should be set explicitly only if a separate, specific range has to be assigned to other servers. Any range set in the dnaNextRange attribute must be unique from the available range for the other servers to avoid duplication. If there is no request from the other servers and the server where dnaNextRange is set explicitly has reached its set dnaMaxValue , the set of values (part of the dnaNextRange ) is allocated from this deck. The dnaNextRange allocation is also limited by the dnaThreshold attribute that is set in the DNA configuration. 
Any range allocated to another server for dnaNextRange cannot violate the threshold for the server, even if the range is available on the deck of dnaNextRange . Note The dnaNextRange attribute is handled internally if it is not set explicitly. When it is handled automatically, the dnaMaxValue attribute serves as the upper limit for the range. Each supplier keeps track of its current range in a separate configuration entry which contains information about its range and its connection settings. This entry is a child of the location in dnaSharedCfgDN . The configuration entry is replicated to all of the other suppliers, so each supplier can check that configuration to find a server to contact for a new range. For example: 7.4.3. Configuring Unique Number Assignments The unique number distribution is configured by creating different instances of the DNA Plug-in. 7.4.3.1. Creating a New Instance of the DNA Plug-in To use the DNA with multiple configurations, create a new instance of the plug-in for each configuration. Note You can create new instances of the plug-in only by using the command line. However, you can edit the settings using both the command line and the web console. To create and enable a new instance of the plug-in: For example, to create a new instance of the plug-in: For details about the value you can set in the --magic-regen parameter, see the dnaMagicRegen attribute description in the Configuration, Command and File Reference . Enable the DNA plug-in. For details, see Section 1.10.2, "Enabling and Disabling Plug-ins" . 7.4.3.2. Configuring Unique Number Assignments Using the Command Line Note Any attribute which has a unique number assigned to it must have an equality index set for it. The server must perform a sorted search, internally, to see if the dnaNextValue is already taken, which requires an equality index on an integer attribute, with the proper ordering matching rule. Creating indexes is described in Section 13.2, "Creating Standard Indexes" . Note Set up the DNA Plug-in on every supplier server, and be careful not to overlap the number range values. Create a new instance of the plug-in. See Section 7.4.3.1, "Creating a New Instance of the DNA Plug-in" . Create the shared container entry in the replicated subtree: Restart the instance: 7.4.3.3. Configuring Unique Number Assignments Using the Web Console To enable and configure the DNA plug-in using the web console: Create a new instance of the plug-in. See Section 7.4.3.1, "Creating a New Instance of the DNA Plug-in" . Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the DNA plug-in. Change the status to ON to enable the plug-in. Click Add Config . Fill the fields, and enable the config. Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 7.4.4. Distributed Number Assignment Plug-in Performance Notes There can be thread locking issues as DNA configuration is changed dynamically, so that new operations which access the DNA configuration (such as a DNA task or additional changes to the DNA configuration) will access the old configuration because the thread with the new configuration has not yet been released. This can cause operations to use the old configuration or simply cause operations to hang. To avoid this, preserve an interval between dynamic DNA configuration changes of 35 seconds.
This means adding a sleep or delay between DNA configuration changes and any directory entry changes that would trigger a DNA plug-in operation.
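As a compact recap, the command-line setup in Section 7.4.3.2 condenses to roughly the following sketch (the dsconf, ldapmodify, and dsctl invocations are the ones listed in the commands section of this chapter; instance_name and the connection details are placeholders):

# 1. Create a DNA configuration entry for uidNumber (Section 7.4.3.1),
#    then enable the DNA plug-in as described in Section 1.10.2.
dsconf -D "cn=Directory Manager" ldap://server.example.com plugin dna config "Account UIDs" add \
    --type uidNumber --filter "(objectclass=posixAccount)" --scope ou=People,dc=example,dc=com \
    --next-value 1 --max-value 1300 \
    --shared-config-entry "cn=Account UIDs,ou=Ranges,dc=example,dc=com" \
    --threshold 100 --range-request-timeout 60 --magic-regen magic

# 2. Create the shared container entry in the replicated subtree.
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x <<EOF
dn: ou=Ranges,dc=example,dc=com
changetype: add
objectclass: top
objectclass: extensibleObject
objectclass: organizationalUnit
ou: Ranges

dn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
changetype: add
objectclass: top
objectclass: extensibleObject
cn: Account UIDs
EOF

# 3. Restart the instance so the new configuration takes effect.
dsctl instance_name restart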
|
[
"dn: uid=jsmith,ou=people,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: posixAccount uid: jsmith cn: John Smith uidNumber: 0 gidNumber: 0 .",
"ldapmodify -D \"cn=Directory Manager\" -W -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: uidNumber uidNumber: 0 - add:gidNumber gidNumber: 0",
"ldapmodify -D \"cn=Directory Manager\" -W -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: uidNumber uidNumber: 0 ^D ldapmodify -D \"cn=Directory Manager\" -W -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: employeeId employeeId: magic",
"dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config objectClass: top objectClass: dnaPluginConfig cn: Account UIDs dnatype: uidNumber dnafilter: (objectclass=posixAccount) dnascope: ou=people,dc=example,dc=com dnaNextValue: 1",
"dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config objectClass: top objectClass: dnaPluginConfig cn: Account UIDs dnatype: uidNumber dnafilter: (objectclass=posixAccount) dnascope: ou=People,dc=example,dc=com dnanextvalue: 1 dnaMaxValue: 1300 dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com dnathreshold: 100 dnaRangeRequestTimeout: 60 dnaNextRange: 1301-2301",
"dn: dnaHostname=ldap1.example.com+dnaPortNum=389,cn=Account UIDs,ou=Ranges,dc=example,dc=com objectClass: dnaSharedConfig objectClass: top dnahostname: ldap1.example.com dnaPortNum: 389 dnaSecurePortNum: 636 dnaRemainingValues: 1000",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin dna config \" Account UIDs \" add --type uidNumber --filter \"(objectclass=posixAccount)\" --scope ou=People,dc=example,dc=com --next-value 1 --max-value 1300 --shared-config-entry \"cn=Account UIDs,ou=Ranges,dc=example,dc=com\" --threshold 100 --range-request-timeout 60 --magic-regen magic",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: ou=Ranges,dc=example,dc=com changetype: add objectclass: top objectclass: extensibleObject objectclass: organizationalUnit ou: Ranges - dn: cn=Account UIDs,ou=Ranges,dc=example,dc=com changetype: add objectclass: top objectclass: extensibleObject cn: Account UIDs",
"dsctl instance_name restart"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/dna
|
Chapter 9. Set Up Invalidation Mode
|
Chapter 9. Set Up Invalidation Mode 9.1. About Invalidation Mode Invalidation is a clustered mode that does not share any data, but instead removes potentially obsolete data from remote caches. Using this cache mode requires another, more permanent store for the data, such as a database. Red Hat JBoss Data Grid, in such a situation, is used as an optimization for a system that performs many read operations and prevents database usage each time a state is needed. When invalidation mode is in use, data changes in a cache prompt other caches in the cluster to evict their outdated data from memory.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-set_up_invalidation_mode
|
Post-installation configuration
|
Post-installation configuration OpenShift Container Platform 4.10 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/index
|
Chapter 5. Migration
|
Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.5. 5.1. Migrating to MariaDB 10.3 The rh-mariadb103 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mariadb103 Software Collection does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb103 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Collection is still installed and even running. The rh-mariadb103 Software Collection includes the rh-mariadb103-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb103*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb103* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 , Migrating to MariaDB 10.1 , and Migrating to MariaDB 10.2 . Note The rh-mariadb103 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb102 and rh-mariadb103 Software Collections The mariadb-bench subpackage has been removed. The default allowed level of the plug-in maturity has been changed to one level less than the server maturity. As a result, plug-ins with a lower maturity level that were previously working, will no longer load. For more information regarding MariaDB 10.3 , see the upstream documentation about changes and about upgrading . 5.1.2. Upgrading from the rh-mariadb102 to the rh-mariadb103 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb102 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb102 server. systemctl stop rh-mariadb102-mariadb.service Install the rh-mariadb103 Software Collection, including the subpackage providing the mysql_upgrade utility. yum install rh-mariadb103-mariadb-server rh-mariadb103-mariadb-server-utils Note that it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Software Collection is still installed because these Collections do not conflict. 
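As a quick reference, the shutdown and installation steps described above condense to the following sketch; the remaining steps (configuration review, data copy, and mysql_upgrade) are covered in the paragraphs that follow:

# 1. Force a slow InnoDB shutdown so the data files are fully consistent.
mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0"

# 2. Stop the old rh-mariadb102 server.
systemctl stop rh-mariadb102-mariadb.service

# 3. Install MariaDB 10.3 together with the mysql_upgrade utility.
yum install rh-mariadb103-mariadb-server rh-mariadb103-mariadb-server-utils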
Inspect configuration of rh-mariadb103 , which is stored in the /etc/opt/rh/rh-mariadb103/my.cnf file and the /etc/opt/rh/rh-mariadb103/my.cnf.d/ directory. Compare it with configuration of rh-mariadb102 stored in /etc/opt/rh/rh-mariadb102/my.cnf and /etc/opt/rh/rh-mariadb102/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb102 Software Collection is stored in the /var/opt/rh/rh-mariadb102/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb103/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb103 database server. systemctl start rh-mariadb103-mariadb.service Perform the data migration. Note that running the mysql_upgrade command is required due to upstream changes introduced in MDEV-14637 . scl enable rh-mariadb103 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb103 -- mysql_upgrade -p Note that when the rh-mariadb103*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. 5.2. Migrating to MariaDB 10.2 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. MariaDB is a community-developed drop-in replacement for MySQL . MariaDB 10.1 has been available as a Software Collection since Red Hat Software Collections 2.2; Red Hat Software Collections 3.5 is distributed with MariaDB 10.2 . The rh-mariadb102 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb102 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Collection is still installed and even running. The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 and Migrating to MariaDB 10.1 . For more information about MariaDB 10.2 , see the upstream documentation about changes in version 10.2 and about upgrading . Note The rh-mariadb102 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.2.1. Notable Differences Between the rh-mariadb101 and rh-mariadb102 Software Collections Major changes in MariaDB 10.2 are described in the Red Hat Software Collections 3.0 Release Notes . Since MariaDB 10.2 , behavior of the SQL_MODE variable has been changed; see the upstream documentation for details. 
Multiple options have changed their default values or have been deprecated or removed. For details, see the Knowledgebase article Migrating from MariaDB 10.1 to the MariaDB 10.2 Software Collection . The rh-mariadb102 Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.2.2. Upgrading from the rh-mariadb101 to the rh-mariadb102 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb101 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb101 server. service rh-mariadb101-mariadb stop Install the rh-mariadb102 Software Collection. yum install rh-mariadb102-mariadb-server Note that it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb102 , which is stored in the /etc/opt/rh/rh-mariadb102/my.cnf file and the /etc/opt/rh/rh-mariadb102/my.cnf.d/ directory. Compare it with configuration of rh-mariadb101 stored in /etc/opt/rh/rh-mariadb101/my.cnf and /etc/opt/rh/rh-mariadb101/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb101 Software Collection is stored in the /var/opt/rh/rh-mariadb101/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb102/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb102 database server. service rh-mariadb102-mariadb start Perform the data migration. scl enable rh-mariadb102 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb102 -- mysql_upgrade -p Note that when the rh-mariadb102*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. 5.3. Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections, unless the *-syspaths packages are installed (see below). 
It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. For instructions, see Migration to MySQL 5.7 . 5.3.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file accordingly (a configuration sketch is shown below). For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly, otherwise an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of the --ssl-verify-server-cert option. Note that these options remain unchanged on the server side. The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 but in the future, it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated.
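Returning to the default authentication plug-in discussed above, the referenced drop-in file would contain an entry along the following lines. This is only a sketch: the exact content shipped with rh-mysql80 is not reproduced in this excerpt, and the [mysqld] section and option name are assumptions based on the standard MySQL 8.0 server variable.

# Switch the rh-mysql80 server to caching_sha2_password (sketch; back up the
# original drop-in file before changing it).
cat > /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf <<'EOF'
[mysqld]
default_authentication_plugin=caching_sha2_password
EOF
systemctl restart rh-mysql80-mysqld.service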
The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.3.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running. systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. 5.4. Migrating to MongoDB 3.6 Red Hat Software Collections 3.5 is released with MongoDB 3.6 , provided by the rh-mongodb36 Software Collection and available only for Red Hat Enterprise Linux 7. The rh-mongodb36 Software Collection includes the rh-mongodb36-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mongodb36*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb36* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.4.1. Notable Differences Between MongoDB 3.4 and MongoDB 3.6 General Changes The rh-mongodb36 Software Collection introduces the following significant general change: On Non-Uniform Access Memory (NUMA) hardware, it is possible to configure systemd services to be launched using the numactl command; see the upstream recommendation . To use MongoDB with the numactl command, you need to install the numactl RPM package and change the /etc/opt/rh/rh-mongodb36/sysconfig/mongod and /etc/opt/rh/rh-mongodb36/sysconfig/mongos configuration files accordingly. Compatibility Changes MongoDB 3.6 includes various minor changes that can affect compatibility with versions of MongoDB : MongoDB binaries now bind to localhost by default, so listening on different IP addresses needs to be explicitly enabled. 
Note that this is already the default behavior for systemd services distributed with MongoDB Software Collections. The MONGODB-CR authentication mechanism has been deprecated. For databases with users created by MongoDB versions earlier than 3.0, upgrade authentication schema to SCRAM . The HTTP interface and REST API have been removed Arbiters in replica sets have priority 0 Master-slave replication has been deprecated For detailed compatibility changes in MongoDB 3.6 , see the upstream release notes . Backwards Incompatible Features The following MongoDB 3.6 features are backwards incompatible and require the version to be set to 3.6 using the featureCompatibilityVersion command : UUID for collections $jsonSchema document validation Change streams Chunk aware secondaries View definitions, document validators, and partial index filters that use version 3.6 query features Sessions and retryable writes Users and roles with authenticationRestrictions For details regarding backward incompatible changes in MongoDB 3.6 , see the upstream release notes . 5.4.2. Upgrading from the rh-mongodb34 to the rh-mongodb36 Software Collection Important Before migrating from the rh-mongodb34 to the rh-mongodb36 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb34/lib/mongodb/ directory. In addition, see the Compatibility Changes to ensure that your applications and deployments are compatible with MongoDB 3.6 . To upgrade to the rh-mongodb36 Software Collection, perform the following steps. To be able to upgrade, the rh-mongodb34 instance must have featureCompatibilityVersion set to 3.4 . Check featureCompatibilityVersion : ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Install the MongoDB servers and shells from the rh-mongodb36 Software Collections: ~]# yum install rh-mongodb36 Stop the MongoDB 3.4 server: ~]# systemctl stop rh-mongodb34-mongod.service Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb34/lib/mongodb/* /var/opt/rh/rh-mongodb36/lib/mongodb/ Configure the rh-mongodb36-mongod daemon in the /etc/opt/rh/rh-mongodb36/mongod.conf file. Start the MongoDB 3.6 server: ~]# systemctl start rh-mongodb36-mongod.service Enable backwards incompatible features: ~]$ scl enable rh-mongodb36 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note After upgrading, it is recommended to run the deployment first without enabling the backwards incompatible features for a burn-in period of time, to minimize the likelihood of a downgrade. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.5. Migrating to MongoDB 3.4 The rh-mongodb34 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, provides MongoDB 3.4 . 5.5.1. Notable Differences Between MongoDB 3.2 and MongoDB 3.4 General Changes The rh-mongodb34 Software Collection introduces various general changes.
Major changes are listed in the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 . For detailed changes, see the upstream release notes . In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Compatibility Changes MongoDB 3.4 includes various minor changes that can affect compatibility with versions of MongoDB . For details, see the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 and the upstream documentation . Notably, the following MongoDB 3.4 features are backwards incompatible and require that the version is set to 3.4 using the featureCompatibilityVersion command: Support for creating read-only views from existing collections or other views Index version v: 2 , which adds support for collation, decimal data and case-insensitive indexes Support for the decimal128 format with the new decimal data type For details regarding backward incompatible changes in MongoDB 3.4 , see the upstream release notes . 5.5.2. Upgrading from the rh-mongodb32 to the rh-mongodb34 Software Collection Note that once you have upgraded to MongoDB 3.4 and started using new features, you cannot downgrade to version 3.2.7 or earlier. You can only downgrade to version 3.2.8 or later. Important Before migrating from the rh-mongodb32 to the rh-mongodb34 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb32/lib/mongodb/ directory. In addition, see the compatibility changes to ensure that your applications and deployments are compatible with MongoDB 3.4 . To upgrade to the rh-mongodb34 Software Collection, perform the following steps. Install the MongoDB servers and shells from the rh-mongodb34 Software Collections: ~]# yum install rh-mongodb34 Stop the MongoDB 3.2 server: ~]# systemctl stop rh-mongodb32-mongod.service Use the service rh-mongodb32-mongodb stop command on a Red Hat Enterprise Linux 6 system. Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/* /var/opt/rh/rh-mongodb34/lib/mongodb/ Configure the rh-mongodb34-mongod daemon in the /etc/opt/rh/rh-mongodb34/mongod.conf file. Start the MongoDB 3.4 server: ~]# systemctl start rh-mongodb34-mongod.service On Red Hat Enterprise Linux 6, use the service rh-mongodb34-mongodb start command instead. Enable backwards-incompatible features: ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note that it is recommended to run the deployment after the upgrade without enabling these features first. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.6. Migrating to PostgreSQL 12 Red Hat Software Collections 3.5 is distributed with PostgreSQL 12 , available only for Red Hat Enterprise Linux 7.
The rh-postgresql12 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. See Section 5.7, "Migrating to PostgreSQL 9.6" for instructions how to migrate to an earlier version or when using Red Hat Enterprise Linux 6. The rh-postgresql12 Software Collection includes the rh-postgresql12-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgreqsl12*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl12* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 12 , see the upstream compatibility notes for PostgreSQL 11 and PostgreSQL 12 . In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . The following table provides an overview of different paths in a Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql10 and rh-postgresql12 Software Colections. Table 5.1. Diferences in the PostgreSQL paths Content postgresql rh-postgresql10 rh-postgresql12 Executables /usr/bin/ /opt/rh/rh-postgresql10/root/usr/bin/ /opt/rh/rh-postgresql12/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql10/root/usr/lib64/ /opt/rh/rh-postgresql12/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql10/lib/pgsql/data/ /var/opt/rh/rh-postgresql12/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql10/lib/pgsql/backups/ /var/opt/rh/rh-postgresql12/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql10/root/usr/include/pgsql/ /opt/rh/rh-postgresql12/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.6.1. 
Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 12 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql12 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 12, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 12 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql12/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql12 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql12/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql12-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql12-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql12 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql12 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql12-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql12 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.6.2. Migrating from the PostgreSQL 10 Software Collection to the PostgreSQL 12 Software Collection To migrate your data from the rh-postgresql10 Software Collection to the rh-postgresql12 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 10 to PostgreSQL 12 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql10/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql10-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql10-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 12 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql12/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql12 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql10-postgresql Alternatively, you can use the /opt/rh/rh-postgresql12/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql10-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql12-postgresql.log log file to find out if any problems occurred during the upgrade. 
Start the new server as root : systemctl start rh-postgresql12-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql12 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old PostgreSQL 10 server, type the following command as root : chkconfig rh-postgresql10-postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql10-postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql10 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql10-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql12 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql12-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql12 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old PostgreSQL 10 server, type the following command as root : chkconfig rh-postgresql10-postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7. Migrating to PostgreSQL 9.6 PostgreSQL 9.6 is available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and it can be safely installed on the same machine in parallel with PostgreSQL 8.4 from Red Hat Enterprise Linux 6, PostgreSQL 9.2 from Red Hat Enterprise Linux 7, or any version of PostgreSQL released in versions of Red Hat Software Collections. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. Important In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . Note that it is currently impossible to upgrade PostgreSQL from 9.5 to 9.6 in a container in an OpenShift environment that is configured with Gluster file volumes. 5.7.1. Notable Differences Between PostgreSQL 9.5 and PostgreSQL 9.6 The most notable changes between PostgreSQL 9.5 and PostgreSQL 9.6 are described in the upstream release notes . The rh-postgresql96 Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other content.
After installing the rh-postgreqsl96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The following table provides an overview of different paths in a Red Hat Enterprise Linux system version of PostgreSQL ( postgresql ) and in the postgresql92 , rh-postgresql95 , and rh-postgresql96 Software Collections. Note that the paths of PostgreSQL 8.4 distributed with Red Hat Enterprise Linux 6 and the system version of PostgreSQL 9.2 shipped with Red Hat Enterprise Linux 7 are the same; the paths for the rh-postgresql94 Software Collection are analogous to rh-postgresql95 . Table 5.2. Diferences in the PostgreSQL paths Content postgresql postgresql92 rh-postgresql95 rh-postgresql96 Executables /usr/bin/ /opt/rh/postgresql92/root/usr/bin/ /opt/rh/rh-postgresql95/root/usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/postgresql92/root/usr/lib64/ /opt/rh/rh-postgresql95/root/usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/postgresql92/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed not installed Data /var/lib/pgsql/data/ /opt/rh/postgresql92/root/var/lib/pgsql/data/ /var/opt/rh/rh-postgresql95/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /opt/rh/postgresql92/root/var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql95/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/postgresql92/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/postgresql92/root/usr/include/pgsql/ /opt/rh/rh-postgresql95/root/usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/postgresql92/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) For changes between PostgreSQL 8.4 and PostgreSQL 9.2 , refer to the Red Hat Software Collections 1.2 Release Notes . 
Notable changes between PostgreSQL 9.2 and PostgreSQL 9.4 are described in Red Hat Software Collections 2.0 Release Notes . For differences between PostgreSQL 9.4 and PostgreSQL 9.5 , refer to Red Hat Software Collections 2.2 Release Notes . 5.7.2. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 9.6 Software Collection Red Hat Enterprise Linux 6 includes PostgreSQL 8.4 , Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql96 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. The following procedures are applicable for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 system versions of PostgreSQL . Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 9.6, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.5. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service postgresql stop To verify that the server is not running, type: service postgresql status Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. 
Otherwise only the postgres user will be allowed to access the database. Procedure 5.6. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : service postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96 -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7.3. Migrating from the PostgreSQL 9.5 Software Collection to the PostgreSQL 9.6 Software Collection To migrate your data from the rh-postgresql95 Software Collection to the rh-postgresql96 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.5 to PostgreSQL 9.6 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql95/lib/pgsql/data/ directory. Procedure 5.7. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service rh-postgresql95-postgresql stop To verify that the server is not running, type: service rh-postgresql95-postgresql status Verify that the old directory /var/opt/rh/rh-postgresql95/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql95/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql command.
Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.8. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service rh-postgresql95-postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql95 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : service rh-postgresql95-postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96 -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. If you need to migrate from the postgresql92 Software Collection, refer to Red Hat Software Collections 2.0 Release Notes ; the procedure is the same, you just need to adjust the version of the new Collection. The same applies to migration from the rh-postgresql94 Software Collection, which is described in Red Hat Software Collections 2.2 Release Notes . 5.8. Migrating to nginx 1.16 The root directory for the rh-nginx116 Software Collection is located in /opt/rh/rh-nginx116/root/ . The error log is stored in /var/opt/rh/rh-nginx116/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx116/nginx/ directory. Configuration files in nginx 1.16 have the same syntax and largely the same format as nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx116/nginx/default.d/ directory are included in the default server block configuration for port 80 .
Important Before upgrading from nginx 1.14 to nginx 1.16 , back up all your data, including web pages located in the /opt/rh/nginx114/root/ tree and configuration files located in the /etc/opt/rh/nginx114/nginx/ tree. If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx114/root/ tree, replicate those changes in the new /opt/rh/rh-nginx116/root/ and /etc/opt/rh/rh-nginx116/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.8 , nginx 1.10 , nginx 1.12 , or nginx 1.14 to nginx 1.16 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . 5.9. Migrating to Redis 5 Redis 3.2 , provided by the rh-redis32 Software Collection, is mostly a strict subset of Redis 4.0 , which is mostly a strict subset of Redis 5.0 . Therefore, no major issues should occur when upgrading from version 3.2 to version 5.0. To upgrade a Redis Cluster to version 5.0, a mass restart of all the instances is needed. Compatibility Notes The format of RDB files has been changed. Redis 5 is able to read formats of all the earlier versions, but earlier versions are incapable of reading the Redis 5 format. Since version 4.0, the Redis Cluster bus protocol is no longer compatible with Redis 3.2 . For minor non-backward compatible changes, see the upstream release notes for version 4.0 and version 5.0 .
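Because RDB files written by Redis 5 cannot be read by earlier Redis versions, it is prudent to snapshot every existing instance before the upgrade so that a rollback remains possible. The following is only a sketch, not part of the official procedure: the host names and port are placeholders, and it assumes the old instances run from the rh-redis32 Software Collection.
# Trigger an RDB snapshot on each old rh-redis32 instance before upgrading (hosts and port are examples).
for host in redis-node1 redis-node2 redis-node3; do
    scl enable rh-redis32 -- redis-cli -h "$host" -p 6379 BGSAVE
done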
|
[
"[mysqld] default_authentication_plugin=caching_sha2_password"
] |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-migration
|
Chapter 19. Virtualization
|
Chapter 19. Virtualization ENA drivers for Amazon Web Services This update adds support for Amazon Elastic Network Adapter (ENA) drivers to the Red Hat Enterprise Linux 7 kernel. ENA significantly enhances networking efficiency of Red Hat Enterprise Linux 7 guest virtual machines for certain instance types of the Amazon Web Services cloud. For more information about ENA, see https://aws.amazon.com/blogs/aws/elastic-network-adapter-high-performance-network-interface-for-amazon-ec2 . (BZ# 1357491 , BZ# 1410047 ) Synthetic Hyper-V FC adapters are supported by the storvsc driver This update improves the way the storvsc driver handles Fibre Channel (FC) devices on Hyper-V virtualization. Notably, when a new synthetic Fibre Channel (FC) adapter is configured on a Hyper-V hypervisor, a new hostX (for example host1 ) file is created in the /sys/class/fc_host/ and /sys/class/scsi_host/ directories. This file contains the port_name and host_name entries determined by the Hyper-V FC Adapter world-wide port number (WWPN) and world-wide node number (WWNN). (BZ# 1308632 , BZ#1425469) Parent HBA can be defined by a WWNN/WWPN pair With this release, a parent host bus adapter (HBA) can be identified by a World Wide Node Name (WWNN) and World Wide Port Name (WWPN),in addition to a scsi_host# . When defined by a scsi_host# , if hardware is added to the host machine, the scsi_host# may change after the host machine reboots. By using a WWNN/WWPN pair, the assignment remains unchanged regardless of hardware changes to the host machine. (BZ#1349696) libvirt rebased to version 3.2.0 The libvirt packages have been upgraded to upstream version 3.2.0, which provides a number of bug fixes and enhancements over the version. Notable changes: This update makes it possible to install and uninstall specific libvirt storage sub-drivers, which reduces the installation footprint. You can now configure the /etc/nsswitch.conf file to instruct Name Services Switch (NSS) to automatically resolve names of KVM guests to their network addresses. (BZ# 1382640 ) KVM now supports MCE This update adds support for Machine Check Exception (MCE) to the KVM kernel modules, which makes it possible to use the Local MCE (LMCE) feature of Intel Xeon v5 processors in KVM guest virtual machines. LMCE can deliver MCE to a single processor thread instead of broadcasting to all threads, which ensures the machine check does not impact the performance of more vCPUs than needed. As a result, this reduces software load when processing MCE on machines with a large number of processor threads. (BZ# 1402102 , BZ#1402116) Added support for rx batching on tun/tap devices With this release, rx batching for tun/tap devices is now supported. This enables receiving bundled network frames which can improve performance. (BZ# 1414627 ) libguestfs rebased to version 1.36.3 The libguestfs packages have been upgraded to upstream version 1.36.3, which provides a number of bug fixes and enhancements over the version. Notable changes include: This update adds the virt-tail utility, which can be used to follow (tail) log files within a guest, similar to the tail -f command. For details, see the virt-tail(1) man page. The virt-v2v utility supports more operating systems and more input sources. In addition, the conversion of Windows guests has been substantially rewritten and simplified. Multiple options have been added for the virt-customize , virt-builder , and virt-systprep utilities. 
(BZ# 1359086 ) Improved virt-v2v installation of QXL drivers This update reworks the virt-v2v implementation of QXL driver installation in Windows guest virtual machines, which ensures that QXL drivers are installed correctly on these guests. (BZ# 1233093 , BZ# 1255610 , BZ# 1357427 , BZ# 1374651 ) virt-v2v can export disk images to qcow2 format 1.1 With this update, the virt-v2v utility exports disk images compatible with qcow2 format version 1.1 when using the -o rhev option. In addition, virt-v2v adds the --vdsm-compat=COMPAT option for the vdsm output mode. This option specifies which version of the qcow2 format virt-v2v uses when exporting images with the -o vdsm option. (BZ# 1400205 ) Additional virt tools can work on LUKS whole-disk encrypted guests This update adds support for working on LUKS whole-disk encrypted guests using the virt-customize , virt-get-kernel , virt-sparsify , and virt-sysprep tools. As a result, these tools can provide keys or passphrases for opening LUKS whole-disk encrypted guests. (BZ# 1362649 ) Tab completion for all libguestfs commands Bash completion scripts have been added for all libguestfs tools. As a result, it is now possible to use Tab completion in bash with every libguestfs command. (BZ# 1367738 ) Resized disks can be written directly to a remote location With this update, the virt-resize utility can write its output to a remote location. This may be useful, for example, in directly writing the resized disk image to a Ceph storage volume. The virt-resize output disk can be specified using a URI. Any supported input protocol and format can be used to specify the output. (BZ# 1404182 ) User namespace is now fully supported The user namespace feature, previously available as a Technology Preview, is now fully supported. It provides additional security to servers running Linux containers by providing better isolation between the host and the containers. Administrators of a container are no longer able to perform administrative operations on the host, which increases security. (BZ#1138782) Driver added for devices that connect over a PCI Express bus in guest virtual machine under Hyper-V In this update, a new driver was added that exposes a root PCI bus when a devices that connects over a PCI Express bus is passed through to a Red Hat Enterprise Linux guest virtual machine running on the Hyper-V hypervisor. The feature is currently supported with Microsoft Windows Server 2016. (BZ#1302147)
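As an illustration of the libvirt 3.2.0 NSS feature described above, guest name resolution is enabled by adding a libvirt source to the hosts line of /etc/nsswitch.conf . The following is only a sketch: the libvirt-nss package name and the guest name are assumptions, so verify them against the libvirt packages installed on your host.
# Install the NSS plugin (package name assumed to be libvirt-nss):
yum install -y libvirt-nss
# Add the "libvirt" source to the hosts line in /etc/nsswitch.conf, for example:
#   hosts: files libvirt dns
grep '^hosts:' /etc/nsswitch.conf
# Names of running KVM guests that obtained addresses from libvirt DHCP should now resolve:
ping -c1 my-guest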
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_virtualization
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/making-open-source-more-inclusive
|
Chapter 5. Installing a cluster quickly on Azure
|
Chapter 5. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.12, you can install a cluster on Microsoft Azure that uses the default configuration options. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform.
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file, which contains Microsoft Azure profile information, in the ~/.azure/ directory on your computer, the installer prompts you to specify the following Azure parameter values for your subscription and service principal. azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Important After you enter values for the previously listed parameters, the installation program creates a osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. These actions ensure that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 
Paste the pull secret from the Red Hat OpenShift Cluster Manager . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. 
Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
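As a quick sanity check after logging in with the CLI (this is not part of the documented procedure, just a common follow-up), you can confirm that all nodes are ready and that the cluster Operators have finished rolling out:
oc get nodes
oc get clusteroperators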
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/installing-azure-default
|
Chapter 116. KafkaUserScramSha512ClientAuthentication schema reference
|
Chapter 116. KafkaUserScramSha512ClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserScramSha512ClientAuthentication type from KafkaUserTlsClientAuthentication , KafkaUserTlsExternalClientAuthentication . It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication . Property Property type Description type string Must be scram-sha-512 . password Password Specify the password for the user. If not set, a new password is generated by the User Operator.
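For illustration, a KafkaUser custom resource that requests SCRAM-SHA-512 credentials might look like the following sketch. The resource name, the strimzi.io/cluster label value, and the apiVersion are assumptions to check against the CRDs installed with your version of Streams; the optional password property can additionally reference a pre-created Secret, as described in the table above.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
EOF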
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaUserScramSha512ClientAuthentication-reference
|
3.9. Additional Configuration for the Active Directory Domain Entry
|
3.9. Additional Configuration for the Active Directory Domain Entry Custom settings for each individual domain can be defined in the /etc/realmd.conf file. Each domain can have its own configuration section; the name of the section must match the domain name. For example: Important Changing the configuration as described in this section only works if the realm join command has not been run yet. If a system is already joined, changing these settings does not have any effect. In such situations, you must leave the domain, as described in Section 3.5, "Removing a System from an Identity Domain" , and then join again, as described in the section called "Joining a Domain" . Note that joining requires the domain administrator's credentials. To change the configuration for a domain, edit the corresponding section in /etc/realmd.conf . The following example disables ID mapping for the ad.example.com domain, sets the host principal, and adds the system to the specified subtree: Note that the same configuration can also be set when originally joining the system to the domain using the realm join command, described in the section called "Joining a Domain" : Table 3.2, "Realm Configuration Options" lists the most notable options that can be set in the domain default section in /etc/realmd.conf . For complete information about the available configuration options, see the realmd.conf (5) man page. Table 3.2. Realm Configuration Options Option Description computer-ou Sets the directory location for adding computer accounts to the domain. This can be the full DN or an RDN, relative to the root entry. The subtree must already exist. user-principal Sets the userPrincipalName attribute value of the computer account to the provided Kerberos principal. automatic-id-mapping Sets whether to enable dynamic ID mapping or disable the mapping and use POSIX attributes configured in Active Directory.
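After the system has been joined, you can check which domain configuration is actually in effect; the realm list command prints the joined domain together with its realmd settings. This is only a convenience check and is not required by the configuration steps above.
realm list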
|
[
"[ad.example.com] attribute = value attribute = value",
"[ad.example.com] computer-ou = ou=Linux Computers,DC=domain,DC=example,DC=com user-principal = host/[email protected] automatic-id-mapping = no",
"realm join --computer-ou= \"ou=Linux Computers,dc=domain,dc=com\" --automatic-id-mapping= no --user-principal= host/[email protected]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/realmd-conf
|
Chapter 6. Using .NET 8.0 on OpenShift Container Platform
|
Chapter 6. Using .NET 8.0 on OpenShift Container Platform 6.1. Overview NET images are added to OpenShift by importing imagestream definitions from s2i-dotnetcore . The imagestream definitions include the dotnet imagestream which contains sdk images for different supported versions of .NET. Life Cycle and Support Policies for the .NET Program provides an up-to-date overview of supported versions. Version Tag Alias .NET 6.0 dotnet:6.0-ubi8 dotnet:6.0 .NET 7.0 dotnet:7.0-ubi8 dotnet:7.0 .NET 8.0 dotnet:8.0-ubi8 dotnet:8.0 The sdk images have corresponding runtime images which are defined under the dotnet-runtime imagestream. The container images work across different versions of Red Hat Enterprise Linux and OpenShift. The UBI-8 based images (suffix -ubi8) are hosted on the registry.access.redhat.com and do not require authentication. 6.2. Installing .NET image streams To install .NET image streams, use image stream definitions from s2i-dotnetcore with the OpenShift Client ( oc ) binary. Image streams can be installed from Linux, Mac, and Windows. You can define .NET image streams in the global openshift namespace or locally in a project namespace. Sufficient permissions are required to update the openshift namespace definitions. Procedure Install (or update) the image streams: 6.3. Deploying applications from source using oc The following example demonstrates how to deploy the example-app application using oc , which is in the app folder on the dotnet-8.0 branch of the redhat-developer/s2i-dotnetcore-ex GitHub repository: Procedure Create a new OpenShift project: Add the ASP.NET Core application: Track the progress of the build: View the deployed application once the build is finished: The application is now accessible within the project. Optional : Make the project accessible externally: Obtain the shareable URL: 6.4. Deploying applications from binary artifacts using oc You can use .NET Source-to-Image (S2I) builder image to build applications using binary artifacts that you provide. Prerequisites Published application. For more information, see Procedure Create a new binary build: Start the build and specify the path to the binary artifacts on your local machine: Create a new application: 6.5. Environment variables for .NET 8.0 The .NET images support several environment variables to control the build behavior of your .NET application. You can set these variables as part of the build configuration, or add them to the .s2i/environment file in the application source code repository. Variable Name Description Default DOTNET_STARTUP_PROJECT Selects the project to run. This must be a project file (for example, csproj or fsproj ) or a folder containing a single project file. . DOTNET_ASSEMBLY_NAME Selects the assembly to run. This must not include the .dll extension. Set this to the output assembly name specified in csproj (PropertyGroup/AssemblyName). The name of the csproj file DOTNET_PUBLISH_READYTORUN When set to true , the application will be compiled ahead of time. This reduces startup time by reducing the amount of work the JIT needs to perform when the application is loading. false DOTNET_RESTORE_SOURCES Specifies the space-separated list of NuGet package sources used during the restore operation. This overrides all of the sources specified in the NuGet.config file. This variable cannot be combined with DOTNET_RESTORE_CONFIGFILE . DOTNET_RESTORE_CONFIGFILE Specifies a NuGet.Config file to be used for restore operations. This variable cannot be combined with DOTNET_RESTORE_SOURCES . 
DOTNET_TOOLS Specifies a list of .NET tools to install before building the app. It is possible to install a specific version by post pending the package name with @<version> . DOTNET_NPM_TOOLS Specifies a list of NPM packages to install before building the application. DOTNET_TEST_PROJECTS Specifies the list of test projects to test. This must be project files or folders containing a single project file. dotnet test is invoked for each item. DOTNET_CONFIGURATION Runs the application in Debug or Release mode. This value should be either Release or Debug . Release DOTNET_VERBOSITY Specifies the verbosity of the dotnet build commands. When set, the environment variables are printed at the start of the build. This variable can be set to one of the msbuild verbosity values ( q[uiet] , m[inimal] , n[ormal] , d[etailed] , and diag[nostic] ). HTTP_PROXY, HTTPS_PROXY Configures the HTTP or HTTPS proxy used when building and running the application, respectively. DOTNET_RM_SRC When set to true , the source code will not be included in the image. DOTNET_SSL_DIRS Deprecated : Use SSL_CERT_DIR instead SSL_CERT_DIR Specifies a list of folders or files with additional SSL certificates to trust. The certificates are trusted by each process that runs during the build and all processes that run in the image after the build (including the application that was built). The items can be absolute paths (starting with / ) or paths in the source repository (for example, certificates). NPM_MIRROR Uses a custom NPM registry mirror to download packages during the build process. ASPNETCORE_URLS This variable is set to http://*:8080 to configure ASP.NET Core to use the port exposed by the image. Changing this is not recommended. http://*:8080 DOTNET_RESTORE_DISABLE_PARALLEL When set to true , disables restoring multiple projects in parallel. This reduces restore timeout errors when the build container is running with low CPU limits. false DOTNET_INCREMENTAL When set to true , the NuGet packages will be kept so they can be re-used for an incremental build. false DOTNET_PACK When set to true , creates a tar.gz file at /opt/app-root/app.tar.gz that contains the published application. 6.6. Creating the MVC sample application s2i-dotnetcore-ex is the default Model, View, Controller (MVC) template application for .NET. This application is used as the example application by the .NET S2I image and can be created directly from the OpenShift UI using the Try Example link. The application can also be created with the OpenShift client binary ( oc ). Procedure To create the sample application using oc : Add the .NET application: Make the application accessible externally: Obtain the sharable URL: Additional resources s2i-dotnetcore-ex application repository on GitHub 6.7. Creating the CRUD sample application s2i-dotnetcore-persistent-ex is a simple Create, Read, Update, Delete (CRUD) .NET web application that stores data in a PostgreSQL database. Procedure To create the sample application using oc : Add the database: Add the .NET application: Add environment variables from the postgresql secret and database service name environment variable: Make the application accessible externally: Obtain the sharable URL: Additional resources s2i-dotnetcore-ex application repository on GitHub
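As a sketch of the two ways to set the environment variables listed in Section 6.5, you can either commit a .s2i/environment file with the application source or set the variables on an existing build configuration with oc ; the build configuration name example-app and the variable values below are examples only.
# Option 1: in the application repository, create .s2i/environment with one VARIABLE=value per line, for example:
#   DOTNET_STARTUP_PROJECT=app
#   DOTNET_CONFIGURATION=Release
# Option 2: set variables on an existing build configuration:
oc set env bc/example-app DOTNET_PUBLISH_READYTORUN=true DOTNET_VERBOSITY=minimal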
|
[
"oc apply [-n namespace ] -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/main/dotnet_imagestreams.json",
"oc new-project sample-project",
"oc new-app --name= example-app 'dotnet:8.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnet-8.0' --build-env DOTNET_STARTUP_PROJECT=app",
"oc logs -f bc/ example-app",
"oc logs -f dc/ example-app",
"oc expose svc/ example-app",
"oc get routes",
"oc new-build --name= my-web-app dotnet:8.0-ubi8 --binary=true",
"oc start-build my-web-app --from-dir= bin/Release/net8.0/publish",
"oc new-app my-web-app",
"oc new-app dotnet:8.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnet-8.0 --context-dir=app",
"oc expose service s2i-dotnetcore-ex",
"oc get route s2i-dotnetcore-ex",
"oc new-app postgresql-ephemeral",
"oc new-app dotnet:8.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-persistent-ex#dotnet-8.0 --context-dir app",
"oc set env dc/s2i-dotnetcore-persistent-ex --from=secret/postgresql -e database-service=postgresql",
"oc expose service s2i-dotnetcore-persistent-ex",
"oc get route s2i-dotnetcore-persistent-ex"
] |
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/using_net_8_0_on_openshift_container_platform
|
4.8. Egenera BladeFrame
|
4.8. Egenera BladeFrame Table 4.9, "Egenera BladeFrame" lists the fence device parameters used by fence_egenera , the fence agent for the Egenera BladeFrame. Table 4.9. Egenera BladeFrame luci Field cluster.conf Attribute Description Name name A name for the Egenera BladeFrame device connected to the cluster. CServer cserver The host name (and optionally the user name in the form of username@hostname ) assigned to the device. Refer to the fence_egenera (8) man page for more information. ESH Path (optional) esh The path to the esh command on the cserver (default is /opt/panmgr/bin/esh) Username user The login name. The default value is root . lpan lpan The logical process area network (LPAN) of the device. pserver pserver The processing blade (pserver) name of the device. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. Figure 4.8, "Egenera BladeFrame" shows the configuration screen for adding an Egenera BladeFrame fence device. Figure 4.8. Egenera BladeFrame The following command creates a fence device instance for an Egenera BladeFrame device: The following is the cluster.conf entry for the fence_egenera device:
|
[
"ccs -f cluster.conf --addfencedev egeneratest agent=fence_egenera user=root cserver=cservertest",
"<fencedevices> <fencedevice agent=\"fence_egenera\" cserver=\"cservertest\" name=\"egeneratest\" user=\"root\"/> </fencedevices>"
] |
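Declaring the fence device is only half of the configuration: each cluster node also needs a fence method that references the device together with its node-specific lpan and pserver values. The following is a hedged sketch of the corresponding clusternode entry in cluster.conf; the node, method, LPAN, and pserver names are hypothetical and must match your Egenera environment:

<clusternode name="node-01.example.com" nodeid="1">
    <fence>
        <method name="egenera">
            <!-- name must match the fencedevice declared above;
                 lpan and pserver identify this particular blade -->
            <device name="egeneratest" lpan="lpan01" pserver="pserver01"/>
        </method>
    </fence>
</clusternode>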
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-egen-CA
|
Chapter 2. Understand MicroProfile
|
Chapter 2. Understand MicroProfile 2.1. MicroProfile Config 2.1.1. MicroProfile Config in JBoss EAP Configuration data can change dynamically and applications need to be able to access the latest configuration information without restarting the server. MicroProfile Config provides portable externalization of configuration data. This means, you can configure applications and microservices to run in multiple environments without modification or repackaging. MicroProfile Config functionality is implemented in JBoss EAP using the SmallRye Config component and is provided by the microprofile-config-smallrye subsystem. Note MicroProfile Config is only supported in JBoss EAP XP. It is not supported in JBoss EAP. Important If you are adding your own Config implementations, you need to use the methods in the latest version of the Config interface. Additional Resources MicroProfile Config SmallRye Config Config implementations 2.1.2. MicroProfile Config sources supported in MicroProfile Config MicroProfile Config configuration properties can come from different locations and can be in different formats. These properties are provided by ConfigSources. ConfigSources are implementations of the org.eclipse.microprofile.config.spi.ConfigSource interface. The MicroProfile Config specification provides the following default ConfigSource implementations for retrieving configuration values: System.getProperties() . System.getenv() . All META-INF/microprofile-config.properties files on the class path. The microprofile-config-smallrye subsystem supports additional types of ConfigSource resources for retrieving configuration values. You can also retrieve the configuration values from the following resources: Properties in a microprofile-config-smallrye/config-source management resource Files in a directory ConfigSource class ConfigSourceProvider class Additional Resources org.jboss.resteasy.microprofile.config.BaseServletConfigSource 2.2. MicroProfile Fault Tolerance 2.2.1. About MicroProfile Fault Tolerance specification The MicroProfile Fault Tolerance specification defines strategies to deal with errors inherent in distributed microservices. The MicroProfile Fault Tolerance specification defines the following strategies to handle errors: Timeout Define the amount of time within which an execution must finish. Defining a timeout prevents waiting for an execution indefinitely. Retry Define the criteria for retrying a failed execution. Fallback Provide an alternative in the case of a failed execution. CircuitBreaker Define the number of failed execution attempts before temporarily stopping. You can define the length of the delay before resuming execution. Bulkhead Isolate failures in part of the system so that the rest of the system can still function. Asynchronous Execute client request in a separate thread. Additional Resources MicroProfile Fault Tolerance specification 2.2.2. MicroProfile Fault Tolerance in JBoss EAP The microprofile-fault-tolerance-smallrye subsystem provides support for MicroProfile Fault Tolerance in JBoss EAP. The subsystem is available only in the JBoss EAP XP stream. The microprofile-fault-tolerance-smallrye subsystem provides the following annotations for interceptor bindings: @Timeout @Retry @Fallback @CircuitBreaker @Bulkhead @Asynchronous You can bind these annotations at the class level or at the method level. An annotation bound to a class applies to all of the business methods of that class. 
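As an illustration of method-level binding, the following sketch combines several of the interceptor bindings on one business method of a CDI bean; the class, the method names, and the chosen values are hypothetical and only show the shape of the annotations:

import java.time.temporal.ChronoUnit;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class ForecastService {

    // Retry up to three times, waiting 200 ms between attempts,
    // and abort any single attempt that runs longer than two seconds.
    @Retry(maxRetries = 3, delay = 200)
    @Timeout(value = 2, unit = ChronoUnit.SECONDS)
    @Fallback(fallbackMethod = "cachedForecast")
    public String currentForecast() {
        return callRemoteService(); // assumed call to an unreliable backend
    }

    // Called when currentForecast() exhausts its retries or times out;
    // a fallback method must have the same signature as the guarded method.
    String cachedForecast() {
        return "forecast temporarily unavailable";
    }

    private String callRemoteService() {
        throw new IllegalStateException("backend unavailable");
    }
}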
The following rules apply to binding interceptors: If a component class declares or inherits a class-level interceptor binding, the following restrictions apply: The class must not be declared final. The class must not contain any static, private, or final methods. If a non-static, non-private method of a component class declares a method level interceptor binding, neither the method nor the component class may be declared final. Fault tolerance operations have the following restrictions: Fault tolerance interceptor bindings must be applied to a bean class or bean class method. When invoked, the invocation must be the business method invocation as defined in the Jakarta Contexts and Dependency Injection specification. An operation is not considered fault tolerant if both of the following conditions are true: The method itself is not bound to any fault tolerance interceptor. The class containing the method is not bound to any fault tolerance interceptor. The microprofile-fault-tolerance-smallrye subsystem provides the following configuration options, in addition to the configuration options provided by MicroProfile Fault Tolerance: io.smallrye.faulttolerance.mainThreadPoolSize io.smallrye.faulttolerance.mainThreadPoolQueueSize Additional Resources MicroProfile Fault Tolerance Specification SmallRye Fault Tolerance project 2.3. MicroProfile Health 2.3.1. MicroProfile Health in JBoss EAP JBoss EAP includes the SmallRye Health component, which you can use to determine whether the JBoss EAP instance is responding as expected. This capability is enabled by default. MicroProfile Health is only available when running JBoss EAP as a standalone server. The MicroProfile Health specification defines the following health checks: Readiness Determines whether an application is ready to process requests. The annotation @Readiness provides this health check. Liveness Determines whether an application is running. The annotation @Liveness provides this health check. Startup Determines whether an application has already started. The annotation @Startup provides this health check. The @Health annotation was removed in MicroProfile Health 3.0. MicroProfile Health 3.1 includes a new Startup health check probe. For more information about the changes in MicroProfile Health 3.1, see Release Notes for MicroProfile Health 3.1 . Important The :empty-readiness-checks-status , :empty-liveness-checks-status , and :empty-startup-checks-status management attributes specify the global status when no readiness , liveness , or startup probes are defined. Additional Resources Global status when probes are not defined SmallRye Health MicroProfile Health Custom health check example 2.4. MicroProfile JWT 2.4.1. MicroProfile JWT integration in JBoss EAP The subsystem microprofile-jwt-smallrye provides MicroProfile JWT integration in JBoss EAP. The following functionalities are provided by the microprofile-jwt-smallrye subsystem: Detecting deployments that use MicroProfile JWT security. Activating support for MicroProfile JWT. The subsystem contains no configurable attributes or resources. In addition to the microprofile-jwt-smallrye subsystem, the org.eclipse.microprofile.jwt.auth.api module provides MicroProfile JWT integration in JBoss EAP. Additional Resources SmallRye JWT 2.4.2. Differences between a traditional deployment and an MicroProfile JWT deployment MicroProfile JWT deployments do not depend on managed SecurityDomain resources like traditional JBoss EAP deployments. 
Instead, a virtual SecurityDomain is created and used across the MicroProfile JWT deployment. As the MicroProfile JWT deployment is configured entirely within the MicroProfile Config properties and the microprofile-jwt-smallrye subsystem, the virtual SecurityDomain does not need any other managed configuration for the deployment. 2.4.3. MicroProfile JWT activation in JBoss EAP MicroProfile JWT is activated for applications based on the presence of an auth-method in the application. The MicroProfile JWT integration is activated for an application in the following way: As part of the deployment process, JBoss EAP scans the application archive for the presence of an auth-method . If an auth-method is present and defined as MP-JWT , the MicroProfile JWT integration is activated. The auth-method can be specified in either or both of the following files: the file containing the class that extends javax.ws.rs.core.Application , annotated with the @LoginConfig the web.xml configuration file If auth-method is defined both in a class, using annotation, and in the web.xml configuration file, the definition in web.xml configuration file is used. 2.4.4. Limitations of MicroProfile JWT in JBoss EAP The MicroProfile JWT implementation in JBoss EAP has certain limitations. The following limitations of MicroProfile JWT implementation exist in JBoss EAP: The MicroProfile JWT implementation parses only the first key from the JSON Web Key Set (JWKS) supplied in the mp.jwt.verify.publickey property. Therefore, if a token claims to be signed by the second key or any key after the second key, the token fails verification and the request containing the token is not authorized. Base64 encoding of JWKS is not supported. In both cases, a clear text JWKS can be referenced instead of using the mp.jwt.verify.publickey.location config property. 2.5. MicroProfile Metrics 2.5.1. MicroProfile Metrics in JBoss EAP JBoss EAP includes the SmallRye Metrics component. The SmallRye Metrics component provides the MicroProfile Metrics functionality using the microprofile-metrics-smallrye subsystem. The microprofile-metrics-smallrye subsystem provides monitoring data for the JBoss EAP instance. The subsystem is enabled by default. Important The microprofile-metrics-smallrye subsystem is only enabled in standalone configurations. Additional Resources SmallRye Metrics MicroProfile Metrics 2.6. MicroProfile OpenAPI 2.6.1. MicroProfile OpenAPI in JBoss EAP MicroProfile OpenAPI is integrated in JBoss EAP using the microprofile-openapi-smallrye subsystem. The MicroProfile OpenAPI specification defines an HTTP endpoint that serves an OpenAPI 3.0 document. The OpenAPI 3.0 document describes the REST services for the host. The OpenAPI endpoint is registered using the configured path, for example http://localhost:8080/openapi, local to the root of the host associated with a deployment. Note Currently, the OpenAPI endpoint for a virtual host can only document a single deployment. To use OpenAPI with multiple deployments registered with different context paths on the same virtual host, each deployment must use a distinct endpoint path. The OpenAPI endpoint returns a YAML document by default. You can also request a JSON document using an Accept HTTP header, or a format query parameter. If the Undertow server or host of a given application defines an HTTPS listener then the OpenAPI document is also available using HTTPS. For example, an endpoint for HTTPS is https://localhost:8443/openapi. 2.7. MicroProfile OpenTracing 2.7.1. 
MicroProfile OpenTracing The ability to trace requests across service boundaries is important, especially in a microservices environment where a request can flow through multiple services during its life cycle. The MicroProfile OpenTracing specification defines behaviors and an API for accessing an OpenTracing compliant Tracer interface within a CDI-bean application. The Tracer interface automatically traces JAX-RS applications. The behaviors specify how OpenTracing Spans are created automatically for incoming and outgoing requests. The API defines how to explicitly disable or enable tracing for given endpoints. Additional Resources For more information about MicroProfile OpenTracing specification, see MicroProfile OpenTracing documentation. For more information about the Tracer interface, see Tracer javadoc . 2.7.2. MicroProfile OpenTracing in JBoss EAP You can use the microprofile-opentracing-smallrye subsystem to configure the distributed tracing of Jakarta EE applications. This subsystem uses the SmallRye OpenTracing component to provide the MicroProfile OpenTracing functionality for JBoss EAP. MicroProfile OpenTracing 2.0 supports tracing requests for applications. You can configure the default Jaeger Java Client tracer, plus a set of instrumentation libraries for components commonly used in Jakarta EE, using JBoss EAP management API with the management CLI or the management console. Note Each individual WAR deployed to the JBoss EAP server automatically has its own Tracer instance. Each WAR within an EAR is treated as an individual WAR, and each has its own Tracer instance. By default, the service name used with the Jaeger Client is derived from the deployment's name, which is usually the WAR file name. Within the microprofile-opentracing-smallrye subsystem, you can configure the Jaeger Java Client by setting system properties or environment variables. Important Configuring the Jaeger Client tracer using system properties and environment variables is provided as a Technology Preview. The system properties and environment variables affiliated with the Jaeger Client tracer might change and become incompatible with each other in future releases. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note By default, the probabilistic sampling strategy of the Jaeger Client for Java is set to 0.001 , meaning that only approximately one in one thousand traces are sampled. To sample every request, set the system properties JAEGER_SAMPLER_TYPE to const and JAEGER_SAMPLER_PARAM to 1 . Additional Resources For more information about SmallRye OpenTracing functionality, see the SmallRye OpenTracing component. For more information about the default tracer, see the Jaeger Java Client. For more information about the Tracer interface, see Tracer javadoc . For more information about overriding the default tracer and tracing Jakarta Contexts and Dependency Injection beans, see Using Eclipse MicroProfile OpenTracing to Trace Requests in the Development Guide . For more information about configuring the Jaeger Client, see the Jaeger documentation. 
For more information about valid system properties, see Configuration via Environment in the Jaeger documentation. 2.8. MicroProfile REST Client 2.8.1. MicroProfile REST client JBoss EAP XP 4.0.0 supports the MicroProfile REST client 2.0 that builds on Jakarta RESTful Web Services 2.1.6 client APIs to provide a type-safe approach to invoke RESTful services over HTTP. The MicroProfile Type Safe REST clients are defined as Java interfaces. With the MicroProfile REST clients, you can write client applications with executable code. Use the MicroProfile REST client to avail the following capabilities: An intuitive syntax Programmatic registration of providers Declarative registration of providers Declarative specification of headers ResponseExceptionMapper Jakarta Contexts and Dependency Injection integration Access to server-sent events (SSE) Additional resources A comparison between MicroProfile REST client and Jakarta RESTful Web Services syntaxes Programmatic registration of providers in MicroProfile REST client Declarative registration of providers in MicroProfile REST client Declarative specification of headers in MicroProfile REST client ResponseExceptionMapper in MicroProfile REST client Context dependency injection with MicroProfile REST client 2.8.2. The resteasy.original.webapplicationexception.behavior MicroProfile Config property MicroProfile Config is the name of a specification that developers can use to configure applications and microservices to run in multiple environments without having to modify or repackage those apps. Previously, MicroProfile Config was available for JBoss EAP as a technology preview, but it has since been removed. MicroProfile Config is now available only on JBoss EAP XP. Defining the resteasy.original.webapplicationexception.behavior MicroProfile Config property You can set the resteasy.original.webapplicationexception.behavior parameter as either a web.xml servlet property or a system property. Here's an example of one such servlet property in web.xml : <context-param> <param-name>resteasy.original.webapplicationexception.behavior</param-name> <param-value>true</param-value> </context-param> You can also use MicroProfile Config to configure any other RESTEasy property. Additional resources For more information about MicroProfile Config on JBoss EAP XP, see Understand MicroProfile . For more information about the MicroProfile REST Client, see MicroProfile REST Client . For more information about RESTEasy, see Jakarta RESTful Web Services Request Processing . 2.9. MicroProfile Reactive Messaging 2.9.1. MicroProfile reactive messaging When you upgrade to JBoss EAP XP 4.0.0, you can enable the newest version of MicroProfile Reactive Messaging, which includes reactive messaging extensions and subsystems. A "reactive stream" is a succession of event data, along with processing protocols and standards, that is pushed across an asynchronous boundary (like a scheduler) without any buffering. An "event" might be a scheduled and repeating temperature check in a weather app, for example. The primary benefit of reactive streams is the seamless interoperability of your various applications and implementations. Reactive messaging provides a framework for building event-driven, data-streaming, and event-sourcing applications. Reactive messaging results in the constant and smooth exchange of event data, the reactive stream, from one app to another. 
You can use MicroProfile Reactive Messaging for asynchronous messaging through reactive streams so that your application can interact with others, like Apache Kafka, for example. After you upgrade your instance of MicroProfile Reactive Messaging to the latest version, you can do the following: Provision a server with MicroProfile Reactive Messaging for the Apache Kafka data-streaming platform. Interact with reactive messaging in-memory and backed by Apache Kafka topics through the latest reactive messaging APIs. Use MicroProfile Metrics to find out how many messages are streamed on a given channel. Additional resources For more information about Apache Kafka, see What is Apache Kafka? 2.9.2. MicroProfile reactive messaging connectors You can use connectors to integrate MicroProfile Reactive Messaging with a number of external messaging systems. MicroProfile for JBoss EAP comes with the Apache Kafka connector. Use the Eclipse MicroProfile Config specification to configure your connectors. The Apache Kafka connector and incorporated layers MicroProfile Reactive Messaging includes the Kafka connector, which you can configure with MicroProfile Config. The Kafka connector incorporates microprofile-reactive-messaging-kafka and microprofile-reactive-messaging Galleon layers. The microprofile-reactive-messaging layer provides the core MicroProfile Reactive Messaging functionality. Table 2.1. Reactive messaging and Apache Kafka connector Galleon layers Layer Definition microprofile-reactive-streams-operators Provides MicroProfile Reactive Streams Operators APIs and supporting implementing modules. Contains MicroProfile Reactive Streams Operators with SmallRye extension and subsystem. Depends on cdi layer. cdi stands for Jakarta Contexts and Dependency Injection; provides subsystems that add @Inject functionality. microprofile-reactive-messaging Provides MicroProfile Reactive Messaging APIs and supporting implementing modules. Contains MicroProfile with SmallRye extension and subsystem. Depends on microprofile-config and microprofile-reactive-streams-operators layers. microprofile-reactive-messaging-kafka Provides Kafka connector modules that enable MicroProfile Reactive Messaging to interact with Kafka. Depends on microprofile-reactive-messaging layer. 2.9.3. The Apache Kafka event streaming platform Apache Kafka is an open source distributed event (data) streaming platform that can publish, subscribe to, store, and process streams of records in real time. It handles event streams from multiple sources and delivers them to multiple consumers, moving large amounts of data from points A to Z and everywhere else, all at the same time. MicroProfile Reactive Messaging uses Apache Kafka to deliver these event records in as few as two microseconds, store them safely in distributed, fault-tolerant clusters, all while making them available across any team-defined zones or geographic regions. Additional resources What is Apache Kafka? Red Hat OpenShift Streams for Apache Kafka Red Hat AMQ
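To make the programming model concrete, the following sketch wires two channels together with the MicroProfile Reactive Messaging annotations. The channel names and the conversion logic are hypothetical; in a real deployment the channels can remain in-memory or be mapped to Kafka topics through MicroProfile Config properties for the Kafka connector:

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class TemperatureConverter {

    // Consumes each value from the "raw-readings" channel and publishes
    // the converted value on the "celsius-readings" channel.
    @Incoming("raw-readings")
    @Outgoing("celsius-readings")
    public double toCelsius(double fahrenheit) {
        return (fahrenheit - 32) / 1.8;
    }

    // Terminal consumer of the converted stream.
    @Incoming("celsius-readings")
    public void log(double celsius) {
        System.out.println("temperature in Celsius: " + celsius);
    }
}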
|
[
"<context-param> <param-name>resteasy.original.webapplicationexception.behavior</param-name> <param-value>true</param-value> </context-param>"
] |
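The web.xml context parameter shown above is one of the two documented ways to set resteasy.original.webapplicationexception.behavior ; the text also notes that the value can be supplied as a system property. A hedged sketch of that alternative, using the JBoss EAP management CLI (the property name comes from the text above, the value is illustrative):

/system-property=resteasy.original.webapplicationexception.behavior:add(value=true)

Passing -Dresteasy.original.webapplicationexception.behavior=true on server startup is an equivalent way to define the same system property.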
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/understand_microprofile
|