title | content | commands | url
---|---|---|---|
Chapter 15. Manually importing a hosted cluster | Chapter 15. Manually importing a hosted cluster Hosted clusters are automatically imported into multicluster engine Operator after the hosted control plane becomes available. 15.1. Limitations of managing imported hosted clusters Hosted clusters are automatically imported into the local multicluster engine for Kubernetes Operator, unlike a standalone OpenShift Container Platform or third party clusters. Hosted clusters run some of their agents in the hosted mode so that the agents do not use the resources of your cluster. If you choose to automatically import hosted clusters, you can update node pools and the control plane in hosted clusters by using the HostedCluster resource on the management cluster. To update node pools and a control plane, see "Updating node pools in a hosted cluster" and "Updating a control plane in a hosted cluster". You can import hosted clusters into a location other than the local multicluster engine Operator by using the Red Hat Advanced Cluster Management (RHACM). For more information, see "Discovering multicluster engine for Kubernetes Operator hosted clusters in Red Hat Advanced Cluster Management". In this topology, you must update your hosted clusters by using the command-line interface or the console of the local multicluster engine for Kubernetes Operator where the cluster is hosted. You cannot update the hosted clusters through the RHACM hub cluster. 15.2. Additional resources Updating node pools in a hosted cluster Updating a control plane in a hosted cluster Discovering multicluster engine for Kubernetes Operator hosted clusters in Red Hat Advanced Cluster Management 15.3. Manually importing hosted clusters If you want to import hosted clusters manually, complete the following steps. Procedure In the console, click Infrastructure Clusters and select the hosted cluster that you want to import. Click Import hosted cluster . Note For your discovered hosted cluster, you can also import from the console, but the cluster must be in an upgradable state. Import on your cluster is disabled if the hosted cluster is not in an upgradable state because the hosted control plane is not available. Click Import to begin the process. The status is Importing while the cluster receives updates and then changes to Ready . 15.4. Manually importing a hosted cluster on AWS You can also import a hosted cluster on Amazon Web Services (AWS) with the command-line interface. Procedure Create your ManagedCluster resource by using the following sample YAML file: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: import.open-cluster-management.io/hosting-cluster-name: local-cluster import.open-cluster-management.io/klusterlet-deploy-mode: Hosted open-cluster-management/created-via: hypershift labels: cloud: auto-detect cluster.open-cluster-management.io/clusterset: default name: <hosted_cluster_name> 1 vendor: OpenShift name: <hosted_cluster_name> spec: hubAcceptsClient: true leaseDurationSeconds: 60 1 Replace <hosted_cluster_name> with the name of your hosted cluster. Run the following command to apply the resource: USD oc apply -f <file_name> 1 1 Replace <file_name> with the YAML file name you created in the step. If you have Red Hat Advanced Cluster Management installed, create your KlusterletAddonConfig resource by using the following sample YAML file. 
If you have installed multicluster engine Operator only, skip this step: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: clusterName: <hosted_cluster_name> clusterNamespace: <hosted_cluster_namespace> clusterLabels: cloud: auto-detect vendor: auto-detect applicationManager: enabled: true certPolicyController: enabled: true iamPolicyController: enabled: true policyController: enabled: true searchCollector: enabled: false 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. Run the following command to apply the resource: USD oc apply -f <file_name> 1 1 Replace <file_name> with the YAML file name you created in the step. After the import process is complete, your hosted cluster becomes visible in the console. You can also check the status of your hosted cluster by running the following command: USD oc get managedcluster <hosted_cluster_name> 15.5. Disabling the automatic import of hosted clusters into multicluster engine Operator Hosted clusters are automatically imported into multicluster engine Operator after the control plane becomes available. If needed, you can disable the automatic import of hosted clusters. Any hosted clusters that were previously imported are not affected, even if you disable automatic import. When you upgrade to multicluster engine Operator 2.5 and automatic import is enabled, all hosted clusters that are not imported are automatically imported if their control planes are available. Note If Red Hat Advanced Cluster Management is installed, all Red Hat Advanced Cluster Management add-ons are also enabled. When automatic import is disabled, only newly created hosted clusters are not automatically imported. Hosted clusters that were already imported are not affected. You can still manually import clusters by using the console or by creating the ManagedCluster and KlusterletAddonConfig custom resources. Procedure To disable the automatic import of hosted clusters, complete the following steps: On the hub cluster, open the hypershift-addon-deploy-config specification that is in the AddonDeploymentConfig resource in the namespace where multicluster engine Operator is installed by entering the following command: USD oc edit addondeploymentconfig hypershift-addon-deploy-config \ -n multicluster-engine In the spec.customizedVariables section, add the autoImportDisabled variable with value of "true" , as shown in the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: hypershift-addon-deploy-config namespace: multicluster-engine spec: customizedVariables: - name: hcMaxNumber value: "80" - name: hcThresholdNumber value: "60" - name: autoImportDisabled value: "true" To re-enable automatic import, set the value of the autoImportDisabled variable to "false" or remove the variable from the AddonDeploymentConfig resource. | [
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: import.open-cluster-management.io/hosting-cluster-name: local-cluster import.open-cluster-management.io/klusterlet-deploy-mode: Hosted open-cluster-management/created-via: hypershift labels: cloud: auto-detect cluster.open-cluster-management.io/clusterset: default name: <hosted_cluster_name> 1 vendor: OpenShift name: <hosted_cluster_name> spec: hubAcceptsClient: true leaseDurationSeconds: 60",
"oc apply -f <file_name> 1",
"apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: clusterName: <hosted_cluster_name> clusterNamespace: <hosted_cluster_namespace> clusterLabels: cloud: auto-detect vendor: auto-detect applicationManager: enabled: true certPolicyController: enabled: true iamPolicyController: enabled: true policyController: enabled: true searchCollector: enabled: false",
"oc apply -f <file_name> 1",
"oc get managedcluster <hosted_cluster_name>",
"oc edit addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: hypershift-addon-deploy-config namespace: multicluster-engine spec: customizedVariables: - name: hcMaxNumber value: \"80\" - name: hcThresholdNumber value: \"60\" - name: autoImportDisabled value: \"true\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hosted_control_planes/manually-importing-a-hosted-cluster |
Chapter 4. Ceph on-wire encryption | Chapter 4. Ceph on-wire encryption You can enable encryption for all Ceph traffic over the network with the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. The second version of Ceph's on-wire protocol, msgr2 , includes several new features: A secure mode encrypting all data moving through the network. Encapsulation improvement of authentication payloads. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports allowing both the legacy, v1-compatible, and the new, v2-compatible, Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon will try to use the v2 protocol first, if possible, but if not, then the legacy v1 protocol will be used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The messenger v2 protocol has two configuration options that control whether the v1 or the v2 protocol is used: ms_bind_msgr1 - This option controls whether a daemon binds to a port speaking the v1 protocol; it is true by default. ms_bind_msgr2 - This option controls whether a daemon binds to a port speaking the v2 protocol; it is true by default. Similarly, two options control based on IPv4 and IPv6 addresses used: ms_bind_ipv4 - This option controls whether a daemon binds to an IPv4 address; it is true by default. ms_bind_ipv6 - This option controls whether a daemon binds to an IPv6 address; it is true by default. Note Ceph daemons or clients using messenger protocol v1 or v2 can implement a throttle, that is, a mechanism for limiting message queue growth. In rare cases, a daemon or client can exceed its throttle, which causes possible delays in message processing. Once throttle limit is hit, you get the following low-level warning message: The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . Ensure that you consider cluster CPU requirements when you plan the Red Hat Ceph Storage cluster, to include encryption overhead. Important Using secure mode is currently supported by Ceph kernel clients, such as CephFS and krbd on Red Hat Enterprise Linux. Using secure mode is supported by Ceph clients using librbd , such as OpenStack Nova, Glance, and Cinder. Address Changes For both versions of the messenger protocol to coexist in the same storage cluster, the address formatting has changed: Old address format was: IP_ADDR : PORT / CLIENT_ID , for example, 1.2.3.4:5678/91011 . New address format is, PROTOCOL_VERSION : IP_ADDR : PORT / CLIENT_ID , for example, v2:1.2.3.4:5678/91011 , where PROTOCOL_VERSION can be either v1 or v2 . Because the Ceph daemons now bind to multiple ports, the daemons display multiple addresses instead of a single address. 
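For reference, you can view these multi-port addresses by dumping the monitor map on a running cluster. A minimal sketch of the usual command, assuming the standard ceph CLI is available (exact output formatting varies by release):

ceph mon dump

Each monitor entry in the resulting dump lists both its v2 and v1 addresses.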
Here is an example from a dump of the monitor map: Also, the mon_host configuration option and specifying addresses on the command line, using -m , supports the new address format. Connection Phases There are four phases for making an encrypted connection: Banner On connection, both the client and the server send a banner. Currently, the Ceph banner is ceph 0 0n . Authentication Exchange All data, sent or received, is contained in a frame for the duration of the connection. The server decides if authentication has completed, and what the connection mode will be. The frame format is fixed, and can be in three different forms depending on the authentication flags being used. Message Flow Handshake Exchange The peers identify each other and establish a session. The client sends the first message, and the server will reply with the same message. The server can close connections if the client talks to the wrong daemon. For new sessions, the client and server proceed to exchanging messages. Client cookies are used to identify a session, and can reconnect to an existing session. Message Exchange The client and server start exchanging messages, until the connection is closed. Additional Resources See the Red Hat Ceph Storage Data Security and Hardening Guide for details on enabling the msgr2 protocol. | [
"Throttler Limit has been hit. Some message processing may be significantly delayed.",
"epoch 1 fsid 50fcf227-be32-4bcb-8b41-34ca8370bd17 last_changed 2021-12-12 11:10:46.700821 created 2021-12-12 11:10:46.700821 min_mon_release 14 (nautilus) 0: [v2:10.0.0.10:3300/0,v1:10.0.0.10:6789/0] mon.a 1: [v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0] mon.b 2: [v2:10.0.0.12:3300/0,v1:10.0.0.12:6789/0] mon.c"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/architecture_guide/ceph-on-wire-encryption_arch |
Chapter 1. Migrating applications to Red Hat build of Quarkus 3.2 | Chapter 1. Migrating applications to Red Hat build of Quarkus 3.2 As an application developer, you can migrate applications that are based on earlier versions of Red Hat build of Quarkus to version 3.2 by using the Quarkus CLI's update command . Important The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. 1.1. Updating projects to the latest Red Hat build of Quarkus version You can update or upgrade your Red Hat build of Quarkus projects to the latest version by using an update command. The update command primarily employs OpenRewrite recipes to automate updates for most project dependencies, source code, and documentation. Although these recipes perform many migration tasks, they do not cover all the tasks detailed in the migration guide. Post-update, if expected updates are missing, consider the following reasons: The recipe applied by the update command might not include a migration task that your project requires. Your project might use an extension that is incompatible with the latest Red Hat build of Quarkus version. Important For projects that use Hibernate ORM or Hibernate Reactive, review the Hibernate ORM 5 to 6 migration quick reference. The following update command covers only a subset of this guide. 1.1.1. Prerequisites Roughly 30 minutes An IDE JDK 11+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later Optionally, the Red Hat build of Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) A project based on Red Hat build of Quarkus version 2.13 or later. 1.1.2. Procedure Create a working branch for your project by using your version control system. To use the Red Hat build of Quarkus CLI in the step, install the latest version of the Red Hat build of Quarkus CLI . Confirm the version number using quarkus -v . Configure your extension registry client as described in the Configuring Red Hat build of Quarkus extension registry client section of the Quarkus "Getting Started" guide. To update using the Red Hat build of Quarkus CLI, go to the project directory and update the project to the latest stream: quarkus update Optional: By default, this command updates to the latest current version. To update to a specific stream instead of latest current version, add the stream option to this command followed by the version; for example: --stream=3.2 To update using Maven instead of the Red Hat build of Quarkus CLI, go to the project directory and update the project to the latest stream: Ensure that the Red Hat build of Quarkus Maven plugin version aligns with the latest supported Red Hat build of Quarkus version. Configure your project according to the guidelines provided in the Getting started with Quarkus guide. mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:update Optional: By default, this command updates to the latest current version. To update to a specific stream instead of latest current version, add the stream option to this command followed by the version; for example: -Dstream=3.2 Analyze the update command output for potential instructions and perform the suggested tasks if necessary. Use a diff tool to inspect all changes. Review the migration guide for items that were not updated by the update command. 
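For orientation, the overall update flow often looks like the following sketch. The branch name, the Maven wrapper invocation, and the pinned stream are illustrative assumptions rather than requirements of this guide:

git checkout -b quarkus-3-2-update
quarkus update --stream=3.2
git diff
./mvnw clean verify

The OpenRewrite recipes applied by the update command do not cover every task described in the migration guide.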
If your project has such items, implement the additional steps advised in these topics. Ensure the project builds without errors, all tests pass, and the application functions as required before deploying to production. Before deploying your updated Red Hat build of Quarkus application to production, ensure the following: The project builds without errors. All tests pass. The application functions as required. 1.2. Changes that affect compatibility with earlier versions This section describes changes in Red Hat build of Quarkus 3.2 that affect the compatibility of applications built with earlier product versions. Review these breaking changes and take the steps required to ensure that your applications continue functioning after you update them to Red Hat build of Quarkus 3.2. To automate many of these changes, use the quarkus update command to update your projects to the latest Red Hat build of Quarkus version . 1.2.1. Cloud 1.2.1.1. Upgrade to the Kubernetes client that is included with Red Hat build of Quarkus The Kubernetes Client has been upgraded from 5.12 to 6.7.2. For more information, see the Kubernetes Client - Migration from 5.x to 6.x guide. 1.2.1.2. Improved logic for generating TLS-based container ports Red Hat build of Quarkus 3.2 introduces changes in how the Kubernetes extension generates TLS-based container ports. Earlier versions automatically added a container port named https to generated deployment resources. This approach posed problems, especially when SSL/TLS was not configured, rendering the port non-functional. In 3.2 and later, the Kubernetes extension does not add a container port named https by default. The container port is only added if you take the following steps: You specify any relevant quarkus.http.ssl.* properties in your application.properties file. You set quarkus.kubernetes.ports.https.tls=true in your application.properties file. 1.2.1.3. Removal of some Kubernetes and OpenShift properties With this 3.2 release, some previously deprecated Kubernetes and OpenShift-related properties have been removed. Replace them with their new counterparts. Table 1.1. Removed properties and their new counterparts Removed property New property quarkus.kubernetes.expose quarkus.kubernetes.ingress.expose quarkus.openshift.expose quarkus.openshift.route.expose quarkus.kubernetes.host quarkus.kubernetes.ingress.host quarkus.openshift.host quarkus.openshift.route.host quarkus.kubernetes.group quarkus.kubernetes.part-of quarkus.openshift.group quarkus.openshift.part-of Additionally, with this release, properties without the quarkus. prefix are ignored. For example, before this release, if you added a kubernetes.name property, it was mapped to quarkus.kubernetes.name . To avoid exceptions like java.lang.ClassCastException when upgrading from 2.16.0.Final to 2.16.1.Final #30850 , this kind of mapping is no longer done. As you continue your work with Kubernetes and OpenShift in the context of Quarkus, use the new properties and include the quarkus. prefix where needed. 1.2.2. Core 1.2.2.1. Upgrade to Jandex 3 With this 3.2 release, Jandex becomes part of the SmallRye project, consolidating all Jandex projects into a single repository: https://github.com/smallrye/jandex/ . Consequently, a new release of the Jandex Maven plugin is delivered alongside the Jandex core. This release also changes the Maven coordinates. Replace the old coordinates with the new ones. Table 1.2. 
Old coordinates and their new counterparts Old coordinates New coordinates org.jboss:jandex io.smallrye:jandex org.jboss.jandex:jandex-maven-plugin io.smallrye:jandex-maven-plugin If you use the Maven Enforcer plugin, configure it to ban any dependencies on org.jboss:jandex . An equivalent plugin is available for Gradle users. 1.2.2.2. Migration path for users of Jandex API Jandex 3 contains many interesting features and improvements. These changes, unfortunately, required a few breaking changes. Here is the recommended migration path: Upgrade to Jandex 2.4.3.Final. This version provides replacements for some methods that have changed in Jandex 3.0.0. For instance, instead of ClassInfo.annotations() , use annotationsMap() , and replace MethodInfo.parameters() with parameterTypes() . Stop using any methods that Jandex has marked as deprecated. Ensure you do not use the return value of Indexer.index() or indexClass() . If you compile your code against Jandex 2.4.3.Final, it can run against both 2.4.3.Final and 3.0.0. However, there are exceptions to this. If you implement the IndexView interface or, in some cases, rely on the UnresolvedTypeVariable class, it is not possible to keep the project compatible with both Jandex 2.4.3 and Jandex 3. Upgrade to Jandex 3.0.0. If you implement the IndexView interface, ensure you implement the methods that have been added. And if you extensively use the Jandex Type hierarchy, verify if you need to handle TypeVariableReference , which is now used to represent recursive type variables. Alongside this release, Jandex introduces a new documentation site . While it's a work in progress, it will become more comprehensive over time. You can also refer to the improved Jandex Javadoc for further information. 1.2.2.3. Removal of io.quarkus.arc.config.ConfigProperties annotation With this 3.2 release, the previously deprecated io.quarkus.arc.config.ConfigProperties annotation has been removed. Instead, use the io.smallrye.config.ConfigMapping annotation to inject multiple related configuration properties. For more information, see the @ConfigMapping section of the "Mapping configuration to objects" guide. 1.2.2.4. Interceptor binding annotations declared on private methods now generate build failures With this 3.2 release, declaring an interceptor binding annotation on a private method is not supported and triggers a build failure; for example: In earlier releases, declaring an interceptor binding annotation on a private method triggered only a warning in logs but was otherwise ignored. This support change aims to prevent unintentional usage of interceptor annotations on private methods because they do not have any effect and can cause confusion. To address this change, remove such annotations from private methods. If removing these annotations is not feasible, you can set the configuration property quarkus.arc.fail-on-intercepted-private-method to false . This setting reverts the system to its behavior, where only a warning is logged. 1.2.2.5. Removal of the @AlternativePriority annotation This release removes the previously deprecated @AlternativePriority annotation. Replace it with both the @Alternative and @Priority annotations. Example: Removed annotation @AlternativePriority(1) Example: Replacement annotations @Alternative @Priority(1) Use jakarta.annotation.Priority with the @Priority annotation instead of io.quarkus.arc.Priority , which is deprecated and planned for removal in a future release. Both annotations perform identical functions. 1.2.2.6. 
Testing changes: Fixation of the Mockito subclass mockmaker This release updates Mockito version 5.x. Notably, Mockito switched the default mockmaker to inline in its 5.0.0 release . However, to preserve the mocking behavior Quarkus users are familiar with since Quarkus 1.x, and to prevent memory leaks for extensive test suites , Quarkus 3.0 fixes the mockmaker to subclass instead of inline until the latter is fully supported. If you want to force the inline mockmaker, follow these steps: Add the following exclusion to your pom.xml : <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5-mockito</artifactId> <exclusions> <exclusion> <groupId>org.mockito</groupId> <artifactId>mockito-subclass</artifactId> </exclusion> </exclusions> <dependency> Add mockito-core to your dependencies. Mockito 5.3 removed the mockito-inline artifact: you can remove it from your dependencies. 1.2.2.7. Update to the minimum supported Maven version Quarkus has undergone a refactoring of its Maven plugins to support Maven 3.9. As a result, the minimum Maven version supported by Quarkus has been raised from 3.6.2 to 3.8.6 or later. Ensure your development environment is updated accordingly to benefit from the latest improvements and features. 1.2.2.8. Removal of quarkus-bootstrap-maven-plugin With this 3.2 release, the previously-deprecated io.quarkus:quarkus-bootstrap-maven-plugin Maven plugin has been removed. This plugin is for Quarkus extension development only. Therefore, if you are developing custom Quarkus extensions, you must change the artifact ID from io.quarkus:quarkus-bootstrap-maven-plugin to io.quarkus:quarkus-extension-maven-plugin . Note This change relates specifically to custom extension development. For standard application development, you use the quarkus-maven-plugin plugin. 1.2.2.9. Mutiny 2 moves to Java Flow Mutiny is a reactive programming library, the versions 1.x of which were based on the org.reactivestream interfaces, whereas version 2 is based on java.util.concurrent.Flow . These APIs are identical, but the package name has changed. Mutiny offers adapters to bridge between Mutiny 2 (Flow API) and other libraries with legacy reactive streams API. 1.2.3. Data 1.2.3.1. Removal of Hibernate ORM with Panache methods With this 3.2 release, the following previously deprecated methods from Hibernate ORM with Panache and Hibernate ORM with Panache in Kotlin have been removed: io.quarkus.hibernate.orm.panache.PanacheRepositoryBase#getEntityManager(Class<?> clazz) io.quarkus.hibernate.orm.panache.kotlin.PanacheRepositoryBase#getEntityManager(clazz: KClass<Any>) Instead, use the Panache.getEntityManager(Class<?> clazz) method. 1.2.3.2. Enhancement in Hibernate ORM: Automated IN clause parameter padding With this 3.2 release, the Hibernate Object-Relational Mapping (ORM) extension has been changed to incorporate automatic IN clause parameter padding as a default setting. This improvement augments the caching efficiency for queries that incorporate IN clauses. To revert to the functionality and deactivate this feature, you can set the property value of quarkus.hibernate-orm.query.in-clause-parameter-padding to false . 1.2.3.3. New dependency: Hibernate Reactive 2 and Hibernate ORM 6.2 With this 3.2 release, Quarkus depends on the Hibernate Reactive 2 extension instead of Hibernate Reactive 1. This change implies several changes in behavior and database schema expectations that are incompatible with earlier versions. 
Most of the changes are related to Hibernate Reactive 2 depending on Hibernate ORM 6.2 instead of Hibernate ORM 5.6. Important The Hibernate Reactive 2 extension is available as a Technology Preview in Red Hat build of Quarkus 3.2. For more information, see the following resources: Migration Guide 3.0: Hibernate Reactive Hibernate Reactive: 2.0 series Migration Guide 3.0: Hibernate ORM 5 to 6 migration 1.2.3.4. Hibernate Search changes Changes in the defaults for projectable and sortable on GeoPoint fields With this 3.2 release, Hibernate Search 6.2 changes how defaults are handled for GeoPoint fields. Suppose your Hibernate Search mapping includes GeoPoint fields that use the default value for the projectable option and either the default value or Sortable.NO for the sortable option. In that case, Elasticsearch schema validation fails on startup because of missing doc values on those fields. To prevent that failure, complete either of the following steps: Revert to the defaults by adding projectable = Projectable.NO to the mapping annotation of relevant GeoPoint fields. Recreate your Elasticsearch indexes and reindex your database. The easiest way to do so is to use the MassIndexer with dropAndCreateSchemaOnStart(true) . For more information, see the Data format and schema changes section of the "Hibernate Search 6.2.1.Final: Migration Guide from 6.1". Deprecated or renamed configuration properties With this 3.2 release, the quarkus.hibernate-search-orm.automatic-indexing.synchronization.strategy property is deprecated and is planned for removal in a future version. Use the quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy property instead. Also, the quarkus.hibernate-search-orm.automatic-indexing.enable-dirty-check property is deprecated and is planned for removal in a future version. There is no alternative to replace it. After the removal, it is planned that Search will always trigger reindexing after a transaction modifies an object's field. That is, if a transaction makes the fields "dirty." For more information, see the Configuration changes section of the "Hibernate Search 6.2.1.Final: Migration Guide from 6.1". 1.2.3.5. Hibernate Validator - Validation.buildDefaultValidatorFactory() now returns a ValidatorFactory managed by Quarkus With this 3.2 release, Quarkus doesn't support the manual creation of ValidatorFactory instances. Instead, you must use the Validation.buildDefaultValidatorFactory() method, which returns ValidatorFactory instances managed by Quarkus that you inject through Context and Dependency Injection (CDI). The main reason for this change is that a ValidatorFactory must be carefully crafted to work in native executables. Before this release, you could still manually create a ValidatorFactory instance and handle it yourself if you could make it work. This change aims to improve the compatibility with components creating their own ValidatorFactory . For more information, see the following resources: Hibernate Validator extension and CDI section of the "Validation with Hibernate Validator" guide. ValidatorFactory and native executables section of the "Validation with Hibernate Validator" guide. Obtaining a Validator instance of the "Hibernate Validator 8.0.0.Final - Jakarta Bean Validation Reference Implementation: Reference Guide." 1.2.3.6. 
Quartz jobs class name change If you are storing jobs for the Quartz extension in a database by using Java Database Connectivity (JDBC), run the following query to update the job class name in your JOB_DETAILS table: UPDATE JOB_DETAILS SET JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerImplUSDInvokerJob' WHERE JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerUSDInvokerJob'; 1.2.3.7. Deprecation of QuarkusTransaction.run and QuarkusTransaction.call methods The QuarkusTransaction.run and QuarkusTransaction.call methods have been deprecated in favor of new, more explicit methods. Update code that relies on these deprecated methods as follows: Before QuarkusTransaction.run(() -> { ... }); QuarkusTransaction.call(() -> { ... }); After QuarkusTransaction.requiringNew().run(() -> { ... }); QuarkusTransaction.requiringNew().call(() -> { ... }); Before QuarkusTransaction.run(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... }); After QuarkusTransaction.joiningExisting().run(() -> { ... }); QuarkusTransaction.joiningExisting().call(() -> { ... }); Before QuarkusTransaction.run(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... }); After QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .run(() -> { ... }); QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .call(() -> { ... }); For more information, see the Programmatic Approach section of the "Using transactions in Quarkus" guide. 1.2.3.8. Renamed Narayana transaction manager property With this 3.2 release, the quarkus.transaction-manager.object-store-directory configuration property is renamed to quarkus.transaction-manager.object-store.directory . Update your configuration by replacing the old property name with the new one. 1.2.4. Messaging 1.2.4.1. Removal of vertx-kafka-client dependency from SmallRye Reactive Messaging This release removes the previously deprecated vertx-kafka-client dependency for the smallrye-reactive-messaging-kafka extension. Although it wasn't used for client implementations, vertx-kafka-client provided default Kafka Serialization and Deserialization (SerDes) for io.vertx.core.buffer.Buffer , io.vertx.core.json.JsonObject , and io.vertx.core.json.JsonArray types from the io.vertx.kafka.client.serialization package. If you require this dependency, you can get SerDes for the mentioned types from the io.quarkus.kafka.client.serialization package. 1.2.5. Native 1.2.5.1. Native compilation - Native executables and .so files With this 3.2 release, changes in GraalVM/Mandrel affect the use of extensions reliant on .so files, such as the Java Abstract Window Toolkit (AWT) extension. 
When using these extensions, you must add or copy the corresponding .so files to the native container; for example: COPY --chown=1001:root target/*.so /work/ COPY --chown=1001:root target/*-runner /work/application Note In this context, the AWT extension provides headless server-side image processing capabilities, not GUI capabilities. 1.2.5.2. Native Compilation - Work around missing CPU features With this 3.2 release, if you build native executables on recent machines and run them on older machines, you might encounter the following failure when starting the application: This error message means that the native compilation used more advanced instruction sets that are unsupported by the CPU running the application. To work around that issue, add the following line to the application.properties file: quarkus.native.additional-build-args=-march=compatibility Then, rebuild your native executable. This setting forces the native compilation to use an older instruction set, increasing the chance of compatibility but decreasing optimization. To explicitly define the target architecture, run native-image -march=list to get a list of supported configurations. Then, specify a target architecture; for example: quarkus.native.additional-build-args=-march=x86-64-v4 If you are experiencing this problem with older AMD64 hosts, try -march=x86-64-v2 before using -march=compatibility . The GraalVM documentation for Native Image Build Options states that "[the -march parameter generates] instructions for a specific machine type. [This parameter] defaults to x86-64-v3 on AMD64 and armv8-a on AArch64. Use -march=compatibility for best compatibility, or -march=native for best performance if a native executable is deployed on the same machine or on a machine with the same CPU features. To list all available machine types, use -march=list ." Note The -march parameter is available only in GraalVM 23 and later. 1.2.5.3. Testing changes: Removal of some annotations With this 3.2 release, the previously deprecated @io.quarkus.test.junit.NativeImageTest and @io.quarkus.test.junit.DisabledOnNativeImageTest annotations have been removed. Replace them with their new counterparts. Table 1.3. Removed annotations and their new counterparts Removed annotations New annotations @io.quarkus.test.junit.NativeImageTest @io.quarkus.test.junit.QuarkusIntegrationTest @io.quarkus.test.junit.DisabledOnNativeImageTest @io.quarkus.test.junit.DisabledOnIntegrationTest The replacement annotations are functionally equivalent to the removed ones. 1.2.6. Observability 1.2.6.1. Deprecated OpenTracing driver is replaced by OpenTelemetry With this 3.2 release, support for the OpenTracing driver has been deprecated. Removal of the OpenTracing driver is planned for a future Quarkus release. With this 3.2 release, the SmallRye GraphQL extension has replaced its OpenTracing integration with OpenTelemetry. As a result, when using OpenTracing, the extension no longer generates spans for GraphQL operations. Also, with this release, the quarkus.smallrye-graphql.tracing.enabled configuration property is obsolete and has been removed. Instead, the SmallRye GraphQL extension automatically produces spans when the OpenTelemetry extension is present. Update your Quarkus applications to use OpenTelemetry so that they remain compatible with future Quarkus releases. 1.2.6.2.
Default metrics format in Micrometer now aligned with Prometheus With this 3.2 release, the Micrometer extension exports metrics in the application/openmetrics-text format by default, in line with the Prometheus standard. This change helps make your data easier to read and interpret. To get metrics in the earlier format, you can change the Accept request header to text/plain. For example, with the curl command: curl -H "Accept: text/plain" localhost:8080/q/metrics/ 1.2.6.3. Changes in the OpenTelemetry extension and removal of some sampler-related properties With this 3.2 release, the OpenTelemetry (OTel) extension has significant improvements. Before this release, the OpenTelemetry SDK (OTel SDK) was created at build time and had limited configuration options; most notably, it could not be disabled at run time. Now, it offers enhanced flexibility. It can be disabled at run time by setting quarkus.otel.sdk.disabled=true . After some preparatory steps at build time, the OTel SDK is configured at run time using the OTel auto-configuration feature. This feature supports some of the properties defined in the Java OpenTelemetry SDK. For more information, see the OpenTelemetry SDK Autoconfigure reference. The OpenTelemetry extension is compatible with earlier versions. Most properties have been deprecated but still function alongside the new ones until they are removed in a future release. You can replace the deprecated properties with new ones. Table 1.4. Deprecated properties and their new counterparts Deprecated properties New properties quarkus.opentelemetry.enabled quarkus.otel.enabled quarkus.opentelemetry.tracer.enabled quarkus.otel.traces.enabled quarkus.opentelemetry.propagators quarkus.otel.propagators quarkus.opentelemetry.tracer.suppress-non-application-uris quarkus.otel.traces.suppress-non-application-uris quarkus.opentelemetry.tracer.include-static-resources quarkus.otel.traces.include-static-resources quarkus.opentelemetry.tracer.sampler quarkus.otel.traces.sampler quarkus.opentelemetry.tracer.sampler.ratio quarkus.otel.traces.sampler.arg quarkus.opentelemetry.tracer.exporter.otlp.enabled quarkus.otel.exporter.otlp.enabled quarkus.opentelemetry.tracer.exporter.otlp.headers quarkus.otel.exporter.otlp.traces.headers quarkus.opentelemetry.tracer.exporter.otlp.endpoint quarkus.otel.exporter.otlp.traces.legacy-endpoint With this 3.2 release, some of the old quarkus.opentelemetry.tracer.sampler -related property values have been removed. If the sampler is parent based, there is no need to set the now-dropped quarkus.opentelemetry.tracer.sampler.parent-based property. Replace the following quarkus.opentelemetry.tracer.sampler values with new ones: Table 1.5. Removed sampler property values and their new counterparts Old value New value New value if parent-based on always_on parentbased_always_on off always_off parentbased_always_off ratio traceidratio parentbased_traceidratio Many new properties are now available. For more information, see the Quarkus Using OpenTelemetry guide. Quarkus allowed the Context and Dependency Injection (CDI) configuration of many classes: IdGenerator , Resource attributes, Sampler , and SpanProcessor . This is a feature not available in standard OTel, but it's still provided here for convenience. However, the CDI creation of the SpanProcessor through the LateBoundBatchSpanProcessor is now deprecated. If there's a need to override or customize it, feedback is appreciated.
The processor will continue to be used for supporting earlier versions, but soon the standard exports bundled with the OTel SDK will be used. This means the default exporter uses the following configuration: As a preview, the stock OTLP exporter is now available by setting: Additional configurations of the OTel SDK are now available, using the standard Service Provider Interface (SPI) hooks for Sampler and SpanExporter . The remaining SPIs are also accessible, although compatibility validation through testing is still required. For more information, see the updated OpenTelemetry Guide . OpenTelemetry upgrades OpenTelemetry (OTel) 1.23.1 introduced breaking changes, including the following items: HTTP span names are now "{http.method} {http.route}" instead of just "{http.route}" . All methods in all Getter classes in instrumentation-api-semconv have been renamed to use the get() naming scheme. Semantic convention changes: Table 1.6. Deprecated properties and their new counterparts Deprecated properties New properties messaging.destination_kind messaging.destination.kind messaging.destination messaging.destination.name messaging.consumer_id messaging.consumer.id messaging.kafka.consumer_group messaging.kafka.consumer.group JDBC tracing activation Before this release, to activate Java Database Connectivity (JDBC) tracing, you used the following configuration: With this 3.2 release, you can use a much simpler configuration: With this configuration, you do not need to change the database URL or declare a different driver. 1.2.7. Security 1.2.7.1. Removal of CORS filter default support for using a wildcard as an origin The default behavior of the cross-origin resource sharing (CORS) filter has significantly changed. In earlier releases, when the CORS filter was enabled, it supported all origins by default. With this 3.2 release, support for all origins is no longer enabled by default. Now, if you want to permit all origins, you must explicitly configure it to do so. After a thorough evaluation, if you determine that all origins require support, configure the system in the following manner: Same-origin requests receive support without needing the quarkus.http.cors.origins configuration. Therefore, adjusting the quarkus.http.cors.origins becomes essential only when you allow trusted third-party origin requests. In such situations, enabling all origins might pose unnecessary risks. Warning Use this setting with caution to maintain optimal system security. 1.2.7.2. OpenAPI CORS support change With this 3.2 release, OpenAPI has changed its cross-origin resource sharing (CORS) settings and no longer enables wildcard ( * ) origin support by default. This change helps to prevent potential leakage of OpenAPI documents, enhancing the overall security of your applications. Although you can enable wildcard origin support in dev mode , it is crucial to consider the potential security implications. Avoid enabling all origins in a production environment because it exposes your applications to security threats. Ensure your CORS settings align with your production environment's recommended security best practices. 1.2.7.3. Encryption of OIDC session cookie by default With this 3.2 release, the OpenID Connect (OIDC) session cookie, created after the completion of an OIDC Authorization Code Flow, is encrypted by default. In most scenarios, you are unlikely to notice this change. 
However, if the mTLS or private_key_jwt authentication methods - where the OIDC client private key signs a JSON Web Token (JWT) - are used between Quarkus and the OIDC Provider, an in-memory encryption key gets generated. This key generation can result in some pods failing to decrypt the session cookie, especially in applications dealing with many requests. This situation can arise when a pod attempting to decrypt the cookie isn't the one that encrypted it. If such issues occur, register an encryption secret of 32 characters; for example: An encrypted session cookie can exceed 4096-bytes, which can cause some browsers to ignore it. If this occurs, try one or more of the following steps: Set quarkus.oidc.token-state-manager.split-tokens=true to store ID, access, and refresh tokens in separate cookies. Set quarkus.oidc.token-state-manager.strategy=id-refresh-tokens if there's no need to use the access token as a source of roles to request UserInfo or propagate it to downstream services. Register a custom quarkus.oidc.TokenStateManager Context and Dependency Injection (CDI) bean with the alternative priority set to 1 . If application users access the Quarkus application from within a trusted network, disable the session cookie encryption by applying the following configuration: 1.2.7.4. Default SameSite attribute set to Lax for OIDC session cookie With this 3.2 release, for the Quarkus OpenID Connect (OIDC) extension, the session cookie SameSite attribute is set to Lax by default. In some earlier releases of Quarkus, the OIDC session cookie SameSite attribute was set to Strict by default. This setting introduced unpredictability in how different browsers handled the session cookie. 1.2.7.5. The OIDC ID token audience claim is verified by default With this 3.2 release, the OpenID Connect (OIDC) ID token aud (audience) claim is verified by default. This claim must equal the value of the configured quarkus.oidc.client-id property, as required by the OIDC specification. To override the expected ID token audience value, set the quarkus.oidc.token.audience configuration property. If you deal with a noncompliant OIDC provider that does not set an ID token aud claim, you can set quarkus.oidc.token.audience to any . Warning Setting quarkus.oidc.token.audience to any reduces the security of your 3.2 application. 1.2.7.6. Removal of default password for the JWT key and keystore Before this release, Quarkus used password as the default password for the JSON Web Token (JWT) key and keystore. With this 3.2 release, this default value has been removed. If you are still using the default password, set a new value to replace password for the following properties in the application.properties file: quarkus.oidc-client.credentials.jwt.key-store-password=password quarkus.oidc-client.credentials.jwt.key-password=password 1.2.8. Web 1.2.8.1. Changes to RESTEasy Reactive multipart With this 3.2 release, the following changes impact multipart support in RESTEasy Reactive: Before this release, you could catch all file uploads regardless of the parameter name using the syntax: @RestForm List<FileUpload> all , but this was ambiguous and not intuitive. Now, this form only fetches parameters named all , just like for every other form element of other types, and you must use the following form to catch every parameter regardless of its name: @RestForm(FileUpload.ALL) List<FileUpload> all . Multipart form parameter support has been added to @BeanParam . The @MultipartForm annotation is now deprecated. 
Use @BeanParam instead of @MultipartForm . The @BeanParam is now optional and implicit for any non-annotated method parameter with fields annotated with any @Rest* or @*Param annotations. Multipart elements are no longer limited to being encapsulated inside @MultipartForm -annotated classes: they can be used as method endpoint parameters and endpoint class fields. Multipart elements now default to the @PartType(MediaType.TEXT_PLAIN) MIME type unless they are of type FileUpload , Path , File , byte[] , or InputStream . Multipart elements of the MediaType.TEXT_PLAIN MIME type are now deserialized using the regular ParamConverter infrastructure. Before this release, deserialization used MessageBodyReader . Multipart elements of the FileUpload , Path , File , byte[] , or InputStream types are special-cased and deserialized by the RESTEasy Reactive extension, not by the MessageBodyReader or ParamConverter classes. Multipart elements of other explicitly set MIME types still use the appropriate MessageBodyReader infrastructure. Multipart elements can now be wrapped in List to obtain all values of the part with the same name. Any client call that includes the @RestForm or @FormParam parameters defaults to the MediaType.APPLICATION_FORM_URLENCODED content type unless they are of the File , Path , Buffer , Multi<Byte> , or byte[] types, in which case it defaults to the MediaType.MULTIPART_FORM_DATA content type. Class org.jboss.resteasy.reactive.server.core.multipart.MultipartFormDataOutput has been moved to org.jboss.resteasy.reactive.server.multipart.MultipartFormDataOutput . Class org.jboss.resteasy.reactive.server.core.multipart.PartItem has been moved to org.jboss.resteasy.reactive.server.multipart.PartItem . Class org.jboss.resteasy.reactive.server.core.multipart.FormData.FormValue has been moved to org.jboss.resteasy.reactive.server.multipart.FormValue . The REST Client no longer uses the server-specific MessageBodyReader and MessageBodyWriter classes associated with Jackson. Before this release, the REST Client unintentionally used those classes. The result is that applications that use both quarkus-resteasy-reactive-jackson and quarkus-rest-client-reactive extensions must now include the quarkus-rest-client-reactive-jackson extension. 1.2.8.2. Enhanced JAXB extension control The JAXB extension detects classes that use JAXB annotations and registers them into the default JAXBContext instance. Before this release, any issues or conflicts between the classes and JAXB triggered a JAXB exception at runtime, providing a detailed description to help troubleshoot the problem. However, you could preemptively tackle these conflicts during the build stage. This release adds a feature that can validate the JAXBContext instance at build time so that you can detect and fix JAXB errors early in the development cycle. For example, as shown in the following code block, binding both classes to the default JAXBContext instance would inevitably lead to a JAXB exception. This is because the classes share the identical name, Model , despite existing in different packages. This concurrent naming creates a conflict, leading to the exception. 
package org.acme.one; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name1; public String getName1() { return name1; } public void setName1(String name1) { this.name1 = name1; } } package org.acme.two; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name2; public String getName2() { return name2; } public void setName2(String name2) { this.name2 = name2; } } To activate this feature, add the following property: quarkus.jaxb.validate-jaxb-context=true Additionally, this release adds the quarkus.jaxb.exclude-classes property. With this property, you can specify classes to exclude from binding to the JAXBContext . You can provide a comma-separated list of fully qualified class names or a list of packages. For example, to resolve the conflict in the preceding example, you can exclude one or both of the classes: quarkus.jaxb.exclude-classes=org.acme.one.Model,org.acme.two.Model Or you can exclude all the classes under a package: quarkus.jaxb.exclude-classes=org.acme.* 1.3. Additional resources Release notes for Red Hat build of Quarkus version 3.2 | [
"quarkus update",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:update",
"jakarta.enterprise.inject.spi.DeploymentException: @Transactional does not affect method com.acme.MyBean.myMethod() because the method is private. [...]",
"@AlternativePriority(1)",
"@Alternative @Priority(1)",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5-mockito</artifactId> <exclusions> <exclusion> <groupId>org.mockito</groupId> <artifactId>mockito-subclass</artifactId> </exclusion> </exclusions> <dependency>",
"UPDATE JOB_DETAILS SET JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerImplUSDInvokerJob' WHERE JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerUSDInvokerJob';",
"QuarkusTransaction.run(() -> { ... }); QuarkusTransaction.call(() -> { ... });",
"QuarkusTransaction.requiringNew().run(() -> { ... }); QuarkusTransaction.requiringNew().call(() -> { ... });",
"QuarkusTransaction.run(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... });",
"QuarkusTransaction.joiningExisting().run(() -> { ... }); QuarkusTransaction.joiningExisting().call(() -> { ... });",
"QuarkusTransaction.run(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... });",
"QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .run(() -> { ... }); QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .call(() -> { ... });",
"COPY --chown=1001:root target/*.so /work/ COPY --chown=1001:root target/*-runner /work/application",
"The current machine does not support all of the following CPU features that are required by the image: [CX8, CMOV, FXSR, MMX, SSE, SSE2, SSE3, SSSE3, SSE4_1, SSE4_2, POPCNT, LZCNT, AVX, AVX2, BMI1, BMI2, FMA]. Please rebuild the executable with an appropriate setting of the -march option.",
"quarkus.native.additional-build-args=-march=compatibility",
"quarkus.native.additional-build-args=-march=x86-64-v4",
"curl -H \"Accept: text/plain\" localhost:8080/q/metrics/",
"quarkus.otel.traces.exporter=cdi",
"quarkus.otel.traces.exporter=otlp",
"quarkus.datasource.jdbc.url=jdbc:otel:postgresql://localhost:5432/mydatabase use the 'OpenTelemetryDriver' instead of the one for your database quarkus.datasource.jdbc.driver=io.opentelemetry.instrumentation.jdbc.OpenTelemetryDriver",
"quarkus.datasource.jdbc.telemetry=true",
"quarkus.http.cors=true quarkus.http.cors.origins=/.*/",
"quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU",
"quarkus.oidc.token-state-manager.encryption-required=false",
"quarkus.oidc-client.credentials.jwt.key-store-password=password quarkus.oidc-client.credentials.jwt.key-password=password",
"package org.acme.one; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name1; public String getName1() { return name1; } public void setName1(String name1) { this.name1 = name1; } } package org.acme.two; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name2; public String getName2() { return name2; } public void setName2(String name2) { this.name2 = name2; } }",
"quarkus.jaxb.validate-jaxb-context=true",
"quarkus.jaxb.exclude-classes=org.acme.one.Model,org.acme.two.Model",
"quarkus.jaxb.exclude-classes=org.acme.*"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/migrating_applications_to_red_hat_build_of_quarkus_3.2/assembly_migrating-to-quarkus-3_quarkus-migration |
13.2.24. Installing SSSD Utilities | 13.2.24. Installing SSSD Utilities Additional tools to handle the SSSD cache, user entries, and group entries are contained in the sssd-tools package. This package is not required, but it is useful to install to help administer user accounts. Note The sssd-tools package is provided by the Optional subscription channel. See Section 8.4.8, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. | [
"~]# yum install sssd-tools"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/installing-sssd-tools |
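As a brief sketch of what the additional tools are used for, the sss_cache utility from the SSSD tool set can expire cached entries so that SSSD re-reads them from the server. The user name below is a placeholder, and the exact package that ships each utility can vary between minor releases.

# Expire the cached entry for a single user
~]# sss_cache -u jsmith
# Expire all cached entries
~]# sss_cache -E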
Chapter 6. Security | Chapter 6. Security 6.1. Connecting with a user and password AMQ .NET can authenticate connections with a user and password. To specify the credentials used for authentication, set the user and password fields in the connection URL. Example: Connecting with a user and password Address addr = new Address("amqp:// <user> : <password> @example.com"); Connection conn = new Connection(addr); 6.2. Configuring SASL authentication Client connections to remote peers may exchange SASL user name and password credentials. The presence of the user field in the connection URI controls this exchange. If user is specified then SASL credentials are exchanged; if user is absent then the SASL credentials are not exchanged. By default the client supports EXTERNAL , PLAIN , and ANONYMOUS SASL mechanisms. 6.3. Configuring an SSL/TLS transport Secure communication with servers is achieved using SSL/TLS. A client may be configured for SSL/TLS Handshake only or for SSL/TLS Handshake and client certificate authentication. See the Managing Certificates section for more information. Note TLS Server Name Indication (SNI) is handled automatically by the client library. However, SNI is signaled only for addresses that use the amqps transport scheme where the host is a fully qualified domain name or a host name. SNI is not signaled when the host is a numeric IP address. | [
"Address addr = new Address(\"amqp:// <user> : <password> @example.com\"); Connection conn = new Connection(addr);"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_.net_client/security |
Chapter 1. Migrating Apache Camel | Chapter 1. Migrating Apache Camel 1.1. About the Camel migration guide This guide details the changes in the Apache Camel components that you must consider when migrating your application. This guide provides information about following changes. Supported Java versions Changes to Apache Camel components and deprecated components Changes to APIs and deprecated APIs Updates to EIP Updated to tracing and health checks 1.2. Migrating to Apache Camel 4 This section provides information that can help you migrate your Apache Camel applications from version 3.20 or higher to 4.0. Note For information about individual versions, see: Apache Camel 3.x upgrade guide . Apache Camel 4.x upgrade guide . For information about how to upgrade Apache Camel Quarkus, see: Camel Quarkus migration and upgrade guides . 1.2.1. Java versions Apache Camel 4 supports Java 17. Support for Java 11 is dropped. 1.2.2. Removed Components The following components have been removed: Component Alternative component(s) camel-any23 none camel-atlasmap none camel-atmos none camel-caffeine-lrucache camel-cache, camel-ignite, camel-infinispan camel-cdi camel-spring-boot, camel-quarkus camel-corda none camel-directvm camel-direct camel-dozer camel-mapstruct camel-elasticsearch-rest camel-elasticsearch camel-gora none camel-hbase none camel-hyperledger-aries none camel-iota none camel-ipfs none camel-jbpm none camel-jclouds none camel-johnzon camel-jackson, camel-fastjson, camel-gson camel-microprofile-metrics camel-micrometer, camel-opentelemetry camel-milo none camel-opentracing camel-micrometer, camel-opentelemetry camel-rabbitmq spring-rabbitmq-component camel-rest-swagger camel-openapi-rest camel-restdsl-swagger-plugin camel-restdsl-openapi-plugin camel-resteasy camel-cxf, camel-rest camel-solr none camel-spark none camel-spring-integration none camel-swagger-java camel-openapi-java camel-websocket camel-vertx-websocket camel-websocket-jsr356 camel-vertx-websocket camel-vertx-kafka camel-kafka camel-vm camel-seda camel-weka none camel-xstream camel-jacksonxml camel-zipkin camel-micrometer, camel-opentelemetry 1.2.3. Logging Camel 4 has upgraded logging facade API slf4j-api from 1.7 to 2.0. 1.2.4. JUnit 4 All the camel-test modules that were JUnit 4.x based has been removed. All test modules now use JUnit 5. 1.2.5. API Changes Following APIs are deprecated and removed from version 4: The org.apache.camel.ExchangePattern has removed InOptionalOut . Removed getEndpointMap() method from CamelContext . Removed @FallbackConverter as you should use @Converter(fallback = true) instead. Removed uri attribute on @EndpointInject , @Produce , and @Consume as you should use value (default) instead. For example @Produce(uri = "kafka:cheese") should be changed to @Produce("kafka:cheese") Removed label on @UriEndpoint as you should use category instead. Removed all asyncCallback methods on ProducerTemplate . Use asyncSend or asyncRequest instead. Removed org.apache.camel.spi.OnCamelContextStart . Use org.apache.camel.spi.OnCamelContextStarting instead. Removed org.apache.camel.spi.OnCamelContextStop . Use org.apache.camel.spi.OnCamelContextStopping instead. Decoupled the org.apache.camel.ExtendedCamelContext from the org.apache.camel.CamelContext . Replaced adapt() from org.apache.camel.CamelContext with getCamelContextExtension Decoupled the org.apache.camel.ExtendedExchange from the org.apache.camel.Exchange . 
Replaced adapt() from org.apache.camel.ExtendedExchange with getExchangeExtension Exchange failure handling status has moved from being a property defined as ExchangePropertyKey.FAILURE_HANDLED to a member of the ExtendedExchange, accessible via `isFailureHandled()`method. Removed Discard and DiscardOldest from org.apache.camel.util.concurrent.ThreadPoolRejectedPolicy . Removed org.apache.camel.builder.SimpleBuilder . Was mostly used internally in Camel with the Java DSL in some situations. Moved org.apache.camel.support.IntrospectionSupport to camel-core-engine for internal use only. End users should use org.apache.camel.spi.BeanInspection instead. Removed archetypeCatalogAsXml method from org.apache.camel.catalog.CamelCatalog . The org.apache.camel.health.HealthCheck method isLiveness is now default false instead of true . Added position method to org.apache.camel.StreamCache . The method configure from the interface org.apache.camel.main.Listener was removed The org.apache.camel.support.EventNotifierSupport abstract class now implements CamelContextAware . The type for dumpRoutes on CamelContext has changed from boolean to String to allow specifying either xml or yaml. Note The org.apache.camel.support.PluginHelper gives easy access to various extensions and context plugins, that was available previously in Camel v3 directly from CamelContext . 1.2.6. EIP Changes Removed lang attribute for the <description> on every EIPs. The InOnly and InOut EIPs has been removed. Instead, use SetExchangePattern or To where you can specify exchange pattern to use. 1.2.6.1. Poll Enrich EIP The polled endpoint URI is now stored as property on the Exchange (with key CamelToEndpoint ) like all other EIPs. Before the URI was stored as a message header. 1.2.6.2. CircuitBreaker EIP The following options in camel-resilience4j was mistakenly not defined as attributes: Option bulkheadEnabled bulkheadMaxConcurrentCalls bulkheadMaxWaitDuration timeoutEnabled timeoutExecutorService timeoutDuration timeoutCancelRunningFuture These options were not exposed in YAML DSL, and in XML DSL you need to migrate from: <circuitBreaker> <resilience4jConfiguration> <timeoutEnabled>true</timeoutEnabled> <timeoutDuration>2000</timeoutDuration> </resilience4jConfiguration> ... </circuitBreaker> To use following attributes instead: <circuitBreaker> <resilience4jConfiguration timeoutEnabled="true" timeoutDuration="2000"/> ... </circuitBreaker> 1.2.7. XML DSL The <description> to set a description on a route or node, has been changed from an element to an attribute. Example <route id="myRoute" description="Something that this route do"> <from uri="kafka:cheese"/> ... </route> 1.2.8. Type Converter The String java.io.File converter has been removed. 1.2.9. Tracing The Tracer and Backlog Tracer no longer includes internal tracing events from routes that was created by Rest DSL or route templates or Kamelets. You can turn this on, by setting traceTemplates=true in the tracer. The Backlog Tracer has been enhanced and fixed to trace message headers (also streaming types). This means that previously headers of type InputStream was not traced before, but is now included. This could mean that the header stream is positioned at end, and logging the header afterward, may appear as the header value is empty. 1.2.10. 
UseOriginalMessage / UseOriginalBody When useOriginalMessage or useOriginalBody is enabled in OnException , OnCompletion or error handlers, then the original message body is defensively copied and if possible converted to StreamCache to ensure the body can be re-read when accessed. Previously the original body was not converted to StreamCache which could lead to the body not able to be read or the stream has been closed. 1.2.11. Camel Health Health checks are now by default only readiness checks out of the box. Camel provides the CamelContextCheck as both readiness and liveness checks, so there is at least one of each out of the box. Only consumer based health-checks is enabled by default. 1.2.11.1. Producer Health Checks The option camel.health.components-enabled has been renamed to camel.health.producers-enabled . Some components (in particular AWS) provides also health checks for producers; in Camel 3.x these health checks did not work properly and has been disabled in the source. To continue this behaviour in Camel 4, the producer based health checks are disabled. Notice that camel-kafka comes with producer based health-check that worked in Camel 3, and therefore this change in Camel 4, means that this health-check is disabled. You MUST enable producer health-checks globally, such as in application.properties : camel.health.producers-enabled = true 1.2.12. JMX Camel now also include MBeans for doCatch and doFinally in the tree of processor MBeans. The ManagedChoiceMBean have renamed choiceStatistics to extendedInformation . The ManagedFailoverLoadBalancerMBean have renamed exceptionStatistics to extendedInformation . The CamelContextMBean and CamelRouteMBean has removed method dumpRouteAsXml(boolean resolvePlaceholders, boolean resolveDelegateEndpoints) . 1.2.13. YAML DSL The backwards compatible mode Camel 3.14 or older, which allowed to have steps as child to route has been removed. The new syntax is: - route: from: uri: "direct:info" steps: - log: "message" 1.2.14. Backlog Tracing The option backlogTracing=true is now automatically enabled to start the tracer on startup. In the versions the tracer was only made available, and had to be manually enabled afterwards. The old behavior can be archived by setting backlogTracingStandby=true . Move the following class from org.apache.camel.api.management.mbean.BacklogTracerEventMessage in camel-management-api JAR to org.apache.camel.spi.BacklogTracerEventMessage in camel-api JAR. The org.apache.camel.impl.debugger.DefaultBacklogTracerEventMessage has been refactored into an interface org.apache.camel.spi.BacklogTracerEventMessage with some additional details about traced messages. For example Camel now captures a first and last trace that contains the input and outgoing (if InOut ) messages. 1.2.15. XML serialization The default xml serialization using ModelToXMLDumper has been improved and now uses a generated xml serializer located in the camel-xml-io module instead of the JAXB based one from camel-jaxb . 1.2.16. OpenAPI Maven Plugin The camel-restdsl-openapi-plugin Maven plugin now uses platform-http as the default rest component in the generated Rest DSL code, as it is a better default that works out of the box with Quarkus. 1.2.17. Component changes 1.2.17.1. Category The number of enums for org.apache.camel.Category has been reduced from 83 to 37, which means custom components that are using removed values need to choose one of the remainder values. 
We have done this to consolidate the number of categories of all components in the Camel community. 1.2.17.2. camel-openapi-rest-dsl-generator This dsl-generator has updated the underlying model classes ( apicurio-data-models ) from 1.1.27 to 2.0.3. 1.2.17.3. camel-atom The camel-atom component has changed the 3rd party atom client from Apache Abdera to RSSReader. This means the feed object is changed from org.apache.abdera.model.Feed to com.apptasticsoftware.rssreader.Item . 1.2.17.4. camel-azure-cosmosdb The itemPartitionKey has been updated. It's now a String a not a PartitionKey anymore. More details in CAMEL-19222. 1.2.17.5. camel-bean When using the method option to refer to a specific method, and using parameter types and values, such as: "bean:myBean?method=foo(com.foo.MyOrder, true)" then any class types must now be using .class syntax, i.e. com.foo.MyOrder should now be com.foo.MyOrder.class . Example This also applies to Java types such as String, int. 1.2.17.6. camel-box Upgraded from Box Java SDK v2 to v4, which have some method signature changes. The method to get a file thumbnail is no longer available. 1.2.17.7. camel-caffeine The keyType parameter has been removed. The Key for the cache will now be only String type. More information in CAMEL-18877. 1.2.17.8. camel-fhir The underlying hapi-fhir library has been upgraded from 4.2.0 to 6.2.4. Only the Delete API method has changed and now returns ca.uhn.fhir.rest.api.MethodOutcome instead of org.hl7.fhir.instance.model.api.IBaseOperationOutcome . See hapi-fhir for a more detailed list of underlying changes (only the hapi-fhir client is used in Camel). 1.2.17.9. camel-google The API based components camel-google-drive , camel-google-calendar , camel-google-sheets and camel-google-mail has been upgraded from Google Java SDK v1 to v2 and to latest API revisions. The camel-google-drive and camel-google-sheets have some API methods changes, but the others are identical as before. 1.2.17.10. camel-http The component has been upgraded to use Apache HttpComponents v5 which has an impact on how the underlying client is configured. There are 4 different timeouts ( connectionRequestTimeout , connectTimeout , soTimeout and responseTimeout ) instead of initially 3 ( connectionRequestTimeout , connectTimeout and socketTimeout ) and the default value of some of them has changed. Refer to the documentation for more details. Note that the socketTimeout has been removed from the possible configuration parameters of HttpClient , use responseTimeout instead. Finally, the option soTimeout along with any parameters included into SocketConfig , need to be prefixed by httpConnection. , the rest of the parameters including those defined into HttpClientBuilder and RequestConfig still need to be prefixed by httpClient. like before. 1.2.17.11. camel-http-common The API in org.apache.camel.http.common.HttpBinding has changed slightly to be more reusable. The parseBody method now takes in HttpServletRequest as input parameter. And all HttpMessage has been changed to generic Message types. 1.2.17.12. camel-kubernetes The io.fabric8:kubernetes-client library has been upgraded and some deprecated API usage has been removed. Operations previously prefixed with replace are now prefixed with update . For example replaceConfigMap is now updateConfigMap , replacePod is now updatePod etc. The corresponding constants in class KubernetesOperations are also renamed. 
REPLACE_CONFIGMAP_OPERATION is now UPDATE_CONFIGMAP_OPERATION , REPLACE_POD_OPERATION is now UPDATE_POD_OPERATION etc. 1.2.17.13. camel-web3j The camel-web3j has upgrade web3j JAR from 3.x to 5.0 which has many API changes, and so some API calls are no long provided. 1.2.17.14. camel-main The following constants have been moved from BaseMainSupport / Main to MainConstants : Old Name New Name Main.DEFAULT_PROPERTY_PLACEHOLDER_LOCATION MainConstants.DEFAULT_PROPERTY_PLACEHOLDER_LOCATION Main.INITIAL_PROPERTIES_LOCATION MainConstants.INITIAL_PROPERTIES_LOCATION Main.OVERRIDE_PROPERTIES_LOCATION MainConstants.OVERRIDE_PROPERTIES_LOCATION Main.PROPERTY_PLACEHOLDER_LOCATION MainConstants.PROPERTY_PLACEHOLDER_LOCATION 1.2.17.15. camel-micrometer The metrics has been renamed to follow Micrometer naming convention . Old Name New Name CamelExchangeEventNotifier camel.exchange.event.notifier CamelExchangesFailed camel.exchanges.failed CamelExchangesFailuresHandled camel.exchanges.failures.handled CamelExchangesInflight camel.exchanges.external.redeliveries CamelExchangesSucceeded camel.exchanges.succeeded CamelExchangesTotal camel.exchanges.total CamelMessageHistory camel.message.history CamelRoutePolicy camel.route.policy CamelRoutePolicyLongTask camel.route.policy.long.task CamelRoutesAdded camel.routes.added CamelRoutesRunning camel.routes.running 1.2.17.16. camel-jbang The command camel dependencies has been renamed to camel dependency . In Camel JBang the -dir parameter for init and run goal has been renamed to require 2 dashes --dir like all the other options. The camel stop command will now by default stop all running integrations (the option --all has been removed). The Placeholders substitutes is changed to use #name instead of USDname syntax. 1.2.17.17. camel-openapi-java The camel-openapi-java component has been changed to use io.swagger.v3 libraries instead of io.apicurio.datamodels . As a result, the return type of the public method org.apache.camel.openapi.RestOpenApiReader.read() is now io.swagger.v3.oas.models.OpenAPI instead of io.apicurio.datamodels.openapi.models.OasDocument . When an OpenAPI 2.0 (swagger) specification is parsed, it is automatically upgraded to OpenAPI 3.0.x by the swagger parser. This version also supports OpenAPI 3.1.x specifications. 1.2.17.18. camel-optaplanner The camel-optaplanner component has been change to use SolverManager . If you were using SoverManager in Camel 3, you don't need anymore the boolean useSolverManager in the Route. Deprecated ProblemFactChange has been replaced by ProblemChange . The new URI path is: from("optaplanner:myProblemName") .to("...") You can pass the Optaplanner SolverManager in 2 ways: as #parameter as header When running camel-optaplanner on Quarkus, use the Quarkus way of creating the SolverManager. You can migrate legacy Camel Optaplanner Routes, which will allow Camel Optaplanner to handle creating the SolverManager for those legacy Routes, by providing the XML config file, as show in the code below: Providing Optaplanner Routes XML config file from("optaplanner:myProblemName?configFile=PATH/TO/CONFIG.FILE.xml") .to("...") NOTE Solver Daemon solutions should be migrated to use SolverManager. 1.2.17.19. camel-platform-http-vertx If the route or consumer is suspended then http status 503 is now returned instead of 404. 1.2.17.20. camel-salesforce Property names of blob fields on generated DTOs no longer have 'Url' affixed. For example, the ContentVersionUrl property is now ContentVersion . 1.2.17.21. 
camel-slack The default delay (on slack consumer) is changed from 0.5s to 10s to avoid being rate limited to often by Slack. 1.2.17.22. camel-micrometer-starter The uri tags are now static instead of dynamic (by default), as potential too many tags generated due to URI with dynamic values. This can be enabled again by setting camel.metrics.uriTagDynamic=true . 1.2.17.23. camel-platform-http-starter The platform-http-starter has been changed from using camel-servlet to use the HTTP server directly. Therefore, all the HTTP endpoints are no longer prefixed with the servlet context-path (default is camel ). For example: HTTP endpoint from("platform-http:myservice") .to("...") The endpoint can be called with http://localhost:8080/myservice , as the context-path is not in use. Note The platform-http-starter can also be used with Rest DSL. If the route or consumer is suspended then http status 503 is now returned instead of 404. 1.2.17.24. camel-twitter The camel-twitter component was updated to use Twitter4j version 4.1.2, which has moved the packages used by a few of its classes. If accessing certain twitter-related data, such as the Twit status, you need to update the packages used from twitter4j.Status to twitter4j.v1.Status . 1.3. Migrating to Apache Camel 3 This guide provides information on migrating from Red Hat Fuse 7 to Camel 3 Note There are important differences between Fuse 7 and Camel 3 in the components, such as modularization and XML Schema changes. See each component section for details. Red Hat build of Apache Camel for Quarkus supports Camel version 4. This section provides information relating to upgrading Camel when you migrate your Red Hat Fuse 7 application to Red Hat build of Apache Camel for Quarkus with Camel version 3. 1.3.1. Java versions Camel 3 supports Java 17 and Java 11 but not Java 8. 1.3.1.1. JAXB removed in JDK 11 In Java 11 the JAXB modules have been removed from the JDK, therefore you will need to add them as Maven dependencies (if you use JAXB such as when using XML DSL or the camel-jaxb component): <dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency> Note The Java Platform Standard Edition 11 Development Kit (JDK 11) is deprecated in release version Camel 3.x and is not supported in release versions 4.x. 1.3.2. Modularization of camel-core In Camel 3.x, camel-core has been split into many JARs as follows: camel-api camel-base camel-caffeine-lrucache camel-cloud camel-core camel-jaxp camel-main camel-management-api camel-management camel-support camel-util camel-util-json Maven users of Apache Camel can keep using the dependency camel-core which has transitive dependencies on all of its modules, except for camel-main , and therefore no migration is needed. 1.3.3. Modularization of Components In Camel 3.x, some of the camel-core components are moved into individual components. camel-attachments camel-bean camel-browse camel-controlbus camel-dataformat camel-dataset camel-direct camel-directvm camel-file camel-language camel-log camel-mock camel-ref camel-rest camel-saga camel-scheduler camel-seda camel-stub camel-timer camel-validator camel-vm camel-xpath camel-xslt camel-xslt-saxon camel-zip-deflater 1.3.4. 
Multiple CamelContexts per application not supported Support for multiple CamelContexts has been removed and only one CamelContext per deployment is recommended and supported. The context attribute on the various Camel annotations such as @EndpointInject , @Produce , @Consume etc. has therefore been removed. 1.3.5. Deprecated APIs and Components All deprecated APIs and components from Camel 2.x have been removed in Camel 3. 1.3.5.1. Removed components All deprecated components from Camel 2.x are removed in Camel 3.x: camel-http , camel-hdfs , camel-mina , camel-mongodb , camel-netty , camel-netty-http , camel-quartz , camel-restlet , camel-rx , camel-jibx , camel-boon dataformat , camel-linkedin The Linkedin API is no longer supported . camel-zookeeper The component route policy functionality is removed. Use ZooKeeperClusterService or the camel-zookeeper-master instead. camel-jetty No longer supports producer (which has been removed). Use camel-http component instead. twitter-streaming Removed as it relied on the deprecated Twitter Streaming API and is no longer functional. 1.3.5.2. Renamed components The following components are renamed in Camel 3.x. camel-microprofile-metrics Renamed to camel-micrometer test Renamed to dataset-test and moved out of camel-core into camel-dataset JAR. http4 Renamed to http , and it's corresponding component package from org.apache.camel.component.http4 to org.apache.camel.component.http . The supported schemes are now only http and https . hdfs2 Renamed to hdfs , and it's corresponding component package from org.apache.camel.component.hdfs2 to org.apache.camel.component.hdfs . The supported scheme is now hdfs . mina2 Renamed to mina , and it's corresponding component package from org.apache.camel.component.mina2 to org.apache.camel.component.mina . The supported scheme is now mina . mongodb3 Renamed to mongodb , and it's corresponding component package from org.apache.camel.component.mongodb3 to org.apache.camel.component.mongodb . The supported scheme is now mongodb . netty4-http been renamed to netty-http , and it's corresponding component package from org.apache.camel.component.netty4.http to org.apache.camel.component.netty.http . The supported scheme is now netty-http . netty4 Renamed to netty , and it's corresponding component package from org.apache.camel.component.netty4 to org.apache.camel.component.netty . The supported scheme is now netty . quartz2 Renamed to quartz , and it's corresponding component package from org.apache.camel.component.quartz2 to org.apache.camel.component.quartz . The supported scheme is now quartz . rxjava2 Renamed to rxjava , and it's corresponding component package from org.apache.camel.component.rxjava2 to org.apache.camel.component.rxjava . camel-jetty9 Renamed to camel-jetty . The supported scheme is now jetty . 1.3.6. Changes to Camel components 1.3.6.1. Mock component The mock component has been moved out of camel-core . Because of this a number of methods on its assertion clause builder are removed. 1.3.6.2. ActiveMQ If you are using the activemq-camel component, then you should migrate to use camel-activemq component, where the component name has changed from org.apache.activemq.camel.component.ActiveMQComponent to org.apache.camel.component.activemq.ActiveMQComponent . 1.3.6.3. 
AWS The component camel-aws has been split into multiple components: camel-aws-cw camel-aws-ddb (which contains both ddb and ddbstreams components) camel-aws-ec2 camel-aws-iam camel-aws-kinesis (which contains both kinesis and kinesis-firehose components) camel-aws-kms camel-aws-lambda camel-aws-mq camel-aws-s3 camel-aws-sdb camel-aws-ses camel-aws-sns camel-aws-sqs camel-aws-swf Note It is recommended to add specific dependencies for these components. 1.3.6.4. Camel CXF The camel-cxf JAR has been divided into SOAP vs REST. We recommended you choose the specific JAR from the following list when migrating from camel-cxf . camel-cxf-soap camel-cxf-rest camel-cxf-transport For example, if you were using CXF for SOAP, then select camel-cxf-soap and camel-cxf-transport when migrating from camel-cxf . 1.3.6.4.1. Camel CXF changed namespaces The camel-cxf XML XSD schemas has also changed namespaces. Table 1.1. Changes to namespaces Old Namespace New Namespace http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxrs http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxrs/camel-cxf.xsd The camel-cxf SOAP component is moved to a new jaxws sub-package, that is, org.apache.camel.component.cxf is now org.apache.camel.component.cxf.jaws . For example, the CxfComponent class is now located in org.apache.camel.component.cxf.jaxws . 1.3.6.5. FHIR The camel-fhir component has upgraded it's hapi-fhir dependency to 4.1.0. The default FHIR version has been changed to R4. Therefore, if DSTU3 is desired it has to be explicitly set. 1.3.6.6. Kafka The camel-kafka component has removed the options bridgeEndpoint and circularTopicDetection as this is no longer needed as the component is acting as bridging would work on Camel 2.x. In other words camel-kafka will send messages to the topic from the endpoint uri. To override this use the KafkaConstants.OVERRIDE_TOPIC header with the new topic. See more details in the camel-kafka component documentation. 1.3.6.7. Telegram The camel-telegram component has moved the authorization token from uri-path to a query parameter instead, e.g. migrate to 1.3.6.8. JMX If you run Camel standalone with just camel-core as a dependency, and you want JMX enabled out of the box, then you need to add camel-management as a dependency. For using ManagedCamelContext you now need to get this extension from CamelContext as follows: 1.3.6.9. XSLT The XSLT component has moved out of camel-core into camel-xslt and camel-xslt-saxon . The component is separated so camel-xslt is for using the JDK XSTL engine (Xalan), and camel-xslt-saxon is when you use Saxon. This means that you should use xslt and xslt-saxon as component name in your Camel endpoint URIs. If you are using XSLT aggregation strategy, then use org.apache.camel.component.xslt.saxon.XsltSaxonAggregationStrategy for Saxon support. And use org.apache.camel.component.xslt.saxon.XsltSaxonBuilder for Saxon support if using xslt builder. Also notice that allowStax is also only supported in camel-xslt-saxon as this is not supported by the JDK XSLT. 1.3.6.10. XML DSL Migration The XML DSL has been changed slightly. 
The custom load balancer EIP has changed from <custom> to <customLoadBalancer> The XMLSecurity data format has renamed the attribute keyOrTrustStoreParametersId to keyOrTrustStoreParametersRef in the <secureXML> tag. The <zipFile> data format has been renamed to <zipfile> . 1.3.7. Migrating Camel Maven Plugins The camel-maven-plugin has been split up into two maven plugins: camel-maven-plugin camel-maven-plugin has the run goal, which is intended for quickly running Camel applications standalone. See https://camel.apache.org/manual/camel-maven-plugin.html for more information. camel-report-maven-plugin The camel-report-maven-plugin has the validate and route-coverage goals which is used for generating reports of your Camel projects such as validating Camel endpoint URIs and route coverage reports, etc. See https://camel.apache.org/manual/camel-report-maven-plugin.html for more information. | [
"<circuitBreaker> <resilience4jConfiguration> <timeoutEnabled>true</timeoutEnabled> <timeoutDuration>2000</timeoutDuration> </resilience4jConfiguration> </circuitBreaker>",
"<circuitBreaker> <resilience4jConfiguration timeoutEnabled=\"true\" timeoutDuration=\"2000\"/> </circuitBreaker>",
"<route id=\"myRoute\" description=\"Something that this route do\"> <from uri=\"kafka:cheese\"/> </route>",
"camel.health.producers-enabled = true",
"- route: from: uri: \"direct:info\" steps: - log: \"message\"",
"\"bean:myBean?method=foo(com.foo.MyOrder.class, true)\"",
"\"bean:myBean?method=bar(String.class, int.class)\"",
"from(\"optaplanner:myProblemName\") .to(\"...\")",
"from(\"optaplanner:myProblemName?configFile=PATH/TO/CONFIG.FILE.xml\") .to(\"...\")",
"from(\"platform-http:myservice\") .to(\"...\")",
"<dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency>",
"telegram:bots/myTokenHere",
"telegram:bots?authorizationToken=myTokenHere",
"ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class);"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_apache_camel/migrating-from-camel-to-camel |
8.179. pm-utils | 8.179. pm-utils 8.179.1. RHBA-2014:1455 - pm-utils bug fix update Updated pm-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The pm-utils packages contain a set of utilities and scripts for tasks related to power management. Bug Fix BZ# 1025006 Previously, pm-utils did not support the Advanced Configuration and Power Interfaces (ACPI) S1 (Power on Suspend) power state. As a consequence, when BIOS supported the ACPI S3 (Suspend to RAM) power state but not the S1 power state, the "pm-suspend" command failed. This update introduces support for the S1 power state, and if the S3 power state is not supported by BIOS, pm-suspend now triggers the S1 power state. Users of pm-utils are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/pm-utils |
Using SELinux for SAP HANA | Using SELinux for SAP HANA Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/using_selinux_for_sap_hana/index |
Chapter 13. Self-heal does not complete | Chapter 13. Self-heal does not complete If a self-heal operation never completes, the cause could be a Gluster File ID (GFID) mismatch. 13.1. Gluster File ID mismatch Diagnosis Check self-heal state. Run the following command several times over a few minutes. Note the entries that are shown. If the same entries are shown each time, these entries may have a GFID mismatch. Check the GFID of each entry on each host. On each host, run the following command for each entry: The <backend_path> for an entry is comprised of the brick path and the entry. For example, if the brick for the engine volume has the path of /gluster_bricks/engine/engine and the entry shown in heal info is 58d392a6-e5b1-4aed-9bbc-952210a7137d/ha_agent/hosted-engine.metadata , the backend_path to use is /gluster_bricks/engine/engine/58d392a6-e5b1-4aed-9bbc-952210a7137d/ha_agent/hosted-engine.metadata . Compare the output from each host. If the trusted.gfid for an entry is not the same on all hosts, there is a GFID mismatch. Solution Resolve the mismatch in favor of the GFID with the most recent modification time: For example: Manually trigger a heal on the volume. | [
"gluster volume heal <volname> info",
"getfattr -d -m. -ehex <backend_path> -h",
"gluster volume heal <volume> split-brain latest-mtime <entry>",
"gluster volume heal engine split-brain latest-mtime /58d392a6-e5b1-4aed-9bbc-952210a7137d/ha_agent/hosted-engine.metadata",
"gluster volume heal <volname>"
]
| https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/tshoot-self-heal-does-not-complete |
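To make the comparison step concrete, a small loop such as the following can collect the trusted.gfid attribute for one entry from every host in a single pass. The host names are placeholders, the backend path reuses the example already given above, and this is an illustration rather than part of the original procedure.

# Run from any node that can reach the other hosts over SSH
for host in host1 host2 host3; do
    echo "== ${host} =="
    ssh "${host}" getfattr -d -m. -ehex -h /gluster_bricks/engine/engine/58d392a6-e5b1-4aed-9bbc-952210a7137d/ha_agent/hosted-engine.metadata | grep trusted.gfid
done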
7.298. boost | 7.298. boost 7.298.1. RHBA-2013:0692 - boost bug fix update Updated boost packages that fix one bug are now available for Red Hat Enterprise Linux 6. The boost packages provide free peer-reviewed portable C++ source libraries with emphasis on libraries which work well with the C++ Standard Library. Bug Fix BZ# 921441 Users experienced problems when trying to build MongoDB, because the version of boost (1.41), which was installed by default on Red Hat Enterprise Linux 6.4, had code that violated the compilation rules that GCC (4.4.7) verified. GCC did not check for the error in the boost code itself, and this caused builds to fail for any projects that included the boost/thread.h header file from boost version 1.41. This update fixes this bug by explicitly spelling out the full destructor definition of the boost::exception_ptr class, and the updated packages are now fully compatible with GCC version 4.4.7. Users of boost are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/boost |
B.8.2. RHBA-2011:0361 - cluster and gfs2-utils bug fix update | B.8.2. RHBA-2011:0361 - cluster and gfs2-utils bug fix update Updated cluster and gfs2-utils packages that fix a bug are now available for Red Hat Enterprise Linux 6. The cluster packages contain the core clustering libraries for Red Hat High Availability as well as utilities to maintain GFS2 file systems for users of Red Hat Resilient Storage. Bug Fix BZ# 643279 Due to an incorrect conversion of directory inodes with the height larger than 1, running the gfs2_convert utility on a file system with extremely large directories may have caused the file system to become corrupted. With this update, the underlying source code has been modified to target this issue, and the gfs2_convert utility now works as expected. All users of Red Hat High Availability and Red Hat Resilient Storage are advised to upgrade to these updated packages, which resolve this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2011-0361 |
Chapter 16. DHCP Servers | Chapter 16. DHCP Servers Dynamic Host Configuration Protocol ( DHCP ) is a network protocol that automatically assigns TCP/IP information to client machines. Each DHCP client connects to the centrally located DHCP server, which returns the network configuration (including the IP address, gateway, and DNS servers) of that client. 16.1. Why Use DHCP? DHCP is useful for automatic configuration of client network interfaces. When configuring the client system, you can choose DHCP instead of specifying an IP address, netmask, gateway, or DNS servers. The client retrieves this information from the DHCP server. DHCP is also useful if you want to change the IP addresses of a large number of systems. Instead of reconfiguring all the systems, you can just edit one configuration file on the server for the new set of IP addresses. If the DNS servers for an organization change, the changes happen on the DHCP server, not on the DHCP clients. When you restart the network or reboot the clients, the changes go into effect. If an organization has a functional DHCP server correctly connected to a network, users of laptops and other mobile computers can move these devices from office to office. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-DHCP_Servers |
5.3. Booleans | 5.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: allow_ftpd_use_nfs When enabled, this Boolean allows the ftpd daemon to access NFS volumes. cobbler_use_nfs When enabled, this Boolean allows the cobblerd daemon to access NFS volumes. git_system_use_nfs When enabled, this Boolean allows the Git system daemon to read system shared repositories on NFS volumes. httpd_use_nfs When enabled, this Boolean allows the httpd daemon to access files stored on NFS volumes. qemu_use_nfs When enabled, this Boolean allows Qemu to use NFS volumes. rsync_use_nfs When enabled, this Boolean allows rsync servers to share NFS volumes. samba_share_nfs When enabled, this Boolean allows the smbd daemon to share NFS volumes. When disabled, this Boolean prevents smbd from having full access to NFS shares via Samba. sanlock_use_nfs When enabled, this Boolean allows the sanlock daemon to manage NFS volumes. sge_use_nfs When enabled, this Boolean allows the sge scheduler to access NFS volumes. use_nfs_home_dirs When enabled, this Boolean adds support for NFS home directories. virt_use_nfs When enabled, this Boolean allows confined virtual guests to manage files on NFS volumes. xen_use_nfs When enabled, this Boolean allows Xen to manage files on NFS volumes. git_cgi_use_nfs When enabled, this Boolean allows the Git Common Gateway Interface ( CGI ) to access NFS volumes. tftp_use_nfs When enabled, this Boolean allows the Trivial File Transfer Protocol ( TFTP ) to read from NFS volumes for public file transfer services. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, run the following command as root: | [
"~]# semanage boolean -l | grep service_name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-nfs-booleans |
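For illustration, one Boolean from the list can be enabled with setsebool and verified with getsebool; virt_use_nfs is only an example, and the -P option makes the change persistent across reboots.

~]# setsebool -P virt_use_nfs on
~]# getsebool virt_use_nfs
virt_use_nfs --> on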
function::task_nice | function::task_nice Name function::task_nice - The nice value of the task. Synopsis Arguments task task_struct pointer. General Syntax task_nice:long(task:long) Description This function returns the nice value of the given task. | [
"function task_nice:long(task:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-nice |
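A minimal sketch of calling this function follows; it assumes SystemTap is installed and simply prints the nice value of the stap process itself by combining task_nice with task_current.

~]# stap -e 'probe begin { printf("nice value: %d\n", task_nice(task_current())); exit() }'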
Chapter 6. Testing your Fujitsu ETERNUS configuration | Chapter 6. Testing your Fujitsu ETERNUS configuration After you configure the Block Storage service to use the new ETERNUS back ends, declare a volume type for each back end. Use volume types to specify which back end to use when you create new volumes. Create a Fibre Channel back end and map it to the respective back end with the following commands: Create an iSCSI back end and map it to the respective back end with the following commands: For more information about volume types, see Chapter 4, Creating the Fujitsu ETERNUS environment file : Create a 1GB iSCSI volume named test_iscsi to verify your configuration: Test the Fibre Channel back end: | [
"cinder type-create FJFC cinder type-key FJFC set volume_backend_name=FJFC",
"cinder type-create FJISCSI cinder type-key FJISCSI volume_backend_name=FJISCSI",
"cinder create --volume_type FJISCSI --display_name test_iscsi 1",
"cinder create --volume_type FJFC --display_name test_fc 1"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/fujitsu_eternus_back_end_guide/test |
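To confirm that the test volumes were created successfully, generic Block Storage checks such as the following can be used; they are not part of the original guide. The Status column should read available once the back end has provisioned the volume, and the test volumes can be removed afterwards.

cinder list
cinder show test_iscsi
cinder show test_fc
cinder delete test_iscsi test_fc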
Chapter 3. Preparing software for RPM packaging | Chapter 3. Preparing software for RPM packaging To prepare a piece of software for packaging with RPM, you can first patch the software, create a LICENSE file for it, and archive it as a tarball. 3.1. Patching software When packaging software, you might need to make certain changes to the original source code, such as fixing a bug or changing a configuration file. In RPM packaging, you can instead leave the original source code intact and apply patches on it. A patch is a piece of text that updates a source code file. The patch has a diff format, because it represents the difference between two versions of the text. You can create a patch by using the diff utility, and then apply the patch to the source code by using the patch utility. Note Software developers often use Version Control Systems such as Git to manage their code base. Such tools offer their own methods of creating diffs or patching software. 3.1.1. Creating a patch file for a sample C program You can create a patch from the original source code by using the diff utility. For example, to patch a Hello world program written in C ( cello.c ), complete the following steps. Prerequisites You installed the diff utility on your system: Procedure Back up the original source code: The -p option preserves mode, ownership, and timestamps. Modify cello.c as needed: Generate a patch: Lines that start with + replace the lines that start with - . Note Using the Naur options with the diff command is recommended because it fits the majority of use cases: -N ( --new-file ) The -N option handles absent files as empty files. -a ( --text ) The -a option treats all files as text. As a result, the diff utility does not ignore the files it classified as binaries. -u ( -U NUM or --unified[=NUM] ) The -u option returns output in the form of output NUM (default 3) lines of unified context. This is a compact and an easily readable format commonly used in patch files. -r ( --recursive ) The -r option recursively compares any subdirectories that the diff utility found. However, note that in this particular case, only the -u option is necessary. Save the patch to a file: Restore the original cello.c : Important You must retain the original cello.c because the RPM package manager uses the original file, not the modified one, when building an RPM package. For more information, see Working with spec files . Additional resources diff(1) man page 3.1.2. Patching a sample C program To apply code patches on your software, you can use the patch utility. Prerequisites You installed the patch utility on your system: You created a patch from the original source code. For instructions, see Creating a patch file for a sample C program . Procedure The following steps apply a previously created cello.patch file on the cello.c file. Redirect the patch file to the patch command: Check that the contents of cello.c now reflect the desired change: Verification Build the patched cello.c program: Run the built cello.c program: 3.2. Creating a LICENSE file It is recommended that you distribute your software with a software license. A software license file informs users of what they can and cannot do with a source code. Having no license for your source code means that you retain all rights to this code and no one can reproduce, distribute, or create derivative works from your source code. Procedure Create the LICENSE file with the required license statement: Example 3.1. 
Example GPLv3 LICENSE file text Additional resources Sorce code examples 3.3. Creating a source code archive for distribution An archive file is a file with the .tar.gz or .tgz suffix. Putting source code into the archive is a common way to release the software to be later packaged for distribution. 3.3.1. Creating a source code archive for a sample Bash program The bello project is a Hello World file in Bash . The following example contains only the bello shell script. Therefore, the resulting tar.gz archive has only one file in addition to the LICENSE file. Note The patch file is not distributed in the archive with the program. The RPM package manager applies the patch when the RPM is built. The patch will be placed into the ~/rpmbuild/SOURCES/ directory together with the tar.gz archive. Prerequisites Assume that the 0.1 version of the bello program is used. You created a LICENSE file. For instructions, see Creating a LICENSE file . Procedure Move all required files into a single directory: Create the archive for distribution: Move the created archive to the ~/rpmbuild/SOURCES/ directory, which is the default directory where the rpmbuild command stores the files for building packages: Additional resources Hello World written in bash 3.3.2. Creating a source code archive for a sample Python program The pello project is a Hello World file in Python . The following example contains only the pello.py program. Therefore, the resulting tar.gz archive has only one file in addition to the LICENSE file. Note The patch file is not distributed in the archive with the program. The RPM package manager applies the patch when the RPM is built. The patch will be placed into the ~/rpmbuild/SOURCES/ directory together with the tar.gz archive. Prerequisites Assume that the 0.1.1 version of the pello program is used. You created a LICENSE file. For instructions, see Creating a LICENSE file . Procedure Move all required files into a single directory: Create the archive for distribution: Move the created archive to the ~/rpmbuild/SOURCES/ directory, which is the default directory where the rpmbuild command stores the files for building packages: Additional resources Hello World written in Python 3.3.3. Creating a source code archive for a sample C program The cello project is a Hello World file in C. The following example contains only the cello.c and the Makefile files. Therefore, the resulting tar.gz archive has two files in addition to the LICENSE file. Note The patch file is not distributed in the archive with the program. The RPM package manager applies the patch when the RPM is built. The patch will be placed into the ~/rpmbuild/SOURCES/ directory together with the tar.gz archive. Prerequisites Assume that the 1.0 version of the cello program is used. You created a LICENSE file. For instructions, see Creating a LICENSE file . Procedure Move all required files into a single directory: Create the archive for distribution: Move the created archive to the ~/rpmbuild/SOURCES/ directory, which is the default directory where the rpmbuild command stores the files for building packages: Additional resources Hello World written in C | [
"yum install diffutils",
"cp -p cello.c cello.c.orig",
"#include <stdio.h> int main(void) { printf(\"Hello World from my very first patch!\\n\"); return 0; }",
"diff -Naur cello.c.orig cello.c --- cello.c.orig 2016-05-26 17:21:30.478523360 -0500 + cello.c 2016-05-27 14:53:20.668588245 -0500 @@ -1,6 +1,6 @@ #include<stdio.h> int main(void){ - printf(\"Hello World!\\n\"); + printf(\"Hello World from my very first patch!\\n\"); return 0; } \\ No newline at end of file",
"diff -Naur cello.c.orig cello.c > cello.patch",
"mv cello.c.orig cello.c",
"yum install patch",
"patch < cello.patch patching file cello.c",
"cat cello.c #include<stdio.h> int main(void){ printf(\"Hello World from my very first patch!\\n\"); return 1; }",
"make gcc -g -o cello cello.c",
"./cello Hello World from my very first patch!",
"vim LICENSE",
"cat /tmp/LICENSE This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/ .",
"mkdir bello-0.1 mv ~/bello bello-0.1/ mv LICENSE bello-0.1/",
"tar -cvzf bello-0.1.tar.gz bello-0.1 bello-0.1/ bello-0.1/LICENSE bello-0.1/bello",
"mv bello-0.1.tar.gz ~/rpmbuild/SOURCES/",
"mkdir pello-0.1.1 mv pello.py pello-0.1.1/ mv LICENSE pello-0.1.1/",
"tar -cvzf pello-0.1.1.tar.gz pello-0.1.1 pello-0.1.1/ pello-0.1.1/LICENSE pello-0.1.1/pello.py",
"mv pello-0.1.1.tar.gz ~/rpmbuild/SOURCES/",
"mkdir cello-1.0 mv cello.c cello-1.0/ mv Makefile cello-1.0/ mv LICENSE cello-1.0/",
"tar -cvzf cello-1.0.tar.gz cello-1.0 cello-1.0/ cello-1.0/Makefile cello-1.0/cello.c cello-1.0/LICENSE",
"mv cello-1.0.tar.gz ~/rpmbuild/SOURCES/"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/preparing-software-for-rpm-packaging_packaging-and-distributing-software |
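If the ~/rpmbuild directory tree does not already exist, it can be created before moving the archives into ~/rpmbuild/SOURCES/. Using the rpmdevtools helper, as sketched below, is one common approach but is not required by the steps above.

yum install rpmdevtools
rpmdev-setuptree
ls ~/rpmbuild/
BUILD  RPMS  SOURCES  SPECS  SRPMS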
OAuth APIs | OAuth APIs OpenShift Container Platform 4.15 Reference guide for Oauth APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/oauth_apis/index |
1.3. Query Termination | 1.3. Query Termination 1.3.1. Canceling Queries When a query is canceled, processing will be stopped in the query engine and in all connectors involved in the query. The semantics of what a connector does in response to a cancellation command is dependent on the connector implementation. For example, JDBC connectors will asynchronously call cancel on the underlying JDBC driver, which may or may not actually support this method. 1.3.2. User Query Timeouts User query timeouts in Data Virtualization can be managed on the client-side or server-side. Timeouts are only relevant for the first record returned. If the first record has not been received by the client within the specified timeout period, a "cancel" command is issued to the server for the request and no results are returned to the client. The cancel command is issued asynchronously by the JDBC API without the client's intervention. The JDBC API uses the query timeout set by the java.sql.Statement.setQueryTimeout method. You can also set a default statement timeout via the connection property QUERYTIMEOUT . ODBC clients may also use QUERYTIMEOUT as an execution property via a set statement to control the default timeout setting. See Red Hat JBoss Development Guide: Client Development for more on connection/execution properties and set statements. Server-side timeouts start when the query is received by the engine. The timeout will be canceled if the first result is sent back before the timeout has ended. See Section 6.2, "VDB Definition: The VDB Element" for more on setting the query-timeout VDB property. See the Red Hat JBoss Administration Guide for more information on setting the default query timeout for all queries. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-query_termination |
8.142. perl | 8.142. perl 8.142.1. RHBA-2013:1534 - perl bug fix and enhancement update Updated perl packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Perl is a high-level programming language that is commonly used for system administration utilities and web programming. Note The perl package has been upgraded to upstream version 2.021, which provides a number of bug fixes and enhancements over the version. Support for 64-bit ZIP archives has been improved. Especially, size of files bigger than 2^32 bytes is now reported properly. (BZ# 810469 ) Bug Fixes BZ# 767608 Previously, referring to a named capturing group with non-matching name caused a memory leak. With this update, the underlying source code has been modified to avoid memory leaks in this scenario. BZ# 819042 When the parse_file() function from the Pod::Man or Pod::Text modules was executed without specifying the function output, parse_file() terminated. With this update, parse_file() has been modified to use standard output by default. As a result, parse_file() no longer fails with undefined output. BZ# 825713 Prior to this update, the find2perl utility incorrectly translated global expressions that contained the question mark ("?") character. Consequently, Perl code matched different expressions than the 'find' command-line utility. With this update, the global expression translator has been modified and find2perl now matches the same glob expressions as the 'find' utility does. BZ# 839788 Exiting scope of an object whose destructor method has been declared but not yet defined caused the Perl interpreter to terminate unexpectedly. This bug has been fixed and the interpreter now handles the undefined destructor methods as expected. BZ# 905482 When the XML-LibXSLT library was built without the libgdm-devel package installed on the system, it was unable to link to other libraries. With this update, the glibc-devel, gdbm-devel, and db4-devel packages have been added to the perl-devel list of run-time dependencies. As a result, it is now possible to build native Perl libraries without complications. BZ# 920132 While executing Perl code with the "format" option in a prototyped subroutine, the Perl interpreter terminated unexpectedly with a segmentation fault. With this update, various back-ported fixes have been added to the perl package. As a result, it is now possible to use formats in prototyped subroutines without complications. BZ# 973022 Prior to this update, the XML::Simple::XMLin() parser did not process input from the Getopt::Long::GetOptions() handler. Consequently, XML::Simple::XMLin() reported an unsupported method. With this update, Getopt::Long::GetOptions() has been modified to produce a simple string output that other Perl modules can read without complications. BZ# 991852 After installing a custom signal handler, the perl script attempted to access the thread-specific interpreter structure. This structure has already been disabled and Perl terminated with a segmentation fault. This bug has been fixed and Perl scripts no longer ask for the interpreter structure. As a result, Perl no longer crashes in the aforementioned scenario. Enhancement BZ# 985791 This update adds the CGI.pm module to the list of perl-core dependences. CGI.pm is now installed along with the perl-core package. Users of perl are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/perl |
Chapter 1. New features and enhancements | Chapter 1. New features and enhancements 1.1. Migration Migration tools You can choose any one of the following tools to upgrade and migrate your JBoss EAP XP 2.0.0 product to the JBoss EAP XP 3.0.0 product: JBoss Server Migration Tool Migration Toolkit for Applications (MTA) You cannot use the JBoss EAP XP manager to upgrade and migrate your JBoss EAP XP 2.0.0 product to the JBoss EAP XP 3.0.0 product. Additional resources For more information about the JBoss Server Migration Tool, see Use the JBoss Server Migration Tool to migrate your server configurations in the JBoss EAP XP Migration Guide . For more information about the Migration Toolkit for Applications, see Use the Migration Toolkit for Applications to analyze applications for migration in the JBoss EAP XP Migration Guide . Name change for a configuration element For JBoss EAP XP 3.0.0, the extraServerContentDirs configuration element replaces the extraServerContent configuration element. This replacement aligns with the pre-existing extra-server-content-dirs element. If you used the extraServerContent element in your JBoss EAP Maven plug-in configuration, you must replace this element with the extraServerContentDirs element. If you used the extra-server-content-dirs element then you do not need to make any configuration changes. Additional resources For more information about the extra-server-content-dirs configuration element, see Enabling HTTP authentication for bootable JAR with a CLI script in the Using MicroProfile with JBoss EAP XP 3.0.0 guide. 1.2. MicroProfile Support for MicroProfile 4.0 JBoss EAP XP 3.0 is compatible with MicroProfile 4.0 specifications. Support for MicroProfile Config 2.0 JBoss EAP supports MicroProfile Config 2.0, which is part of MicroProfile 4.0. This Config interface introduces new methods. For more information about the changes, see Release Notes for MicroProfile Config 2.0 . Support for MicroProfile Metrics 3.0 JBoss EAP supports MicroProfile Metrics 3.0, which is part of MicroProfile 4.0. The breaking changes of the new release include the following : Removed everything related to reusability from the API code. All metrics are now considered reusable. Changed metric registration. The CDI producers annotated with @Metric no longer trigger metric registration. You must use the MetricRegistry methods for registering a metric. Changed MetricRegistry from abstract class to interface. For a complete list of changes, see Changes in 3.0 . Support for MicroProfile Health 3.0 JBoss EAP supports MicroProfile Health 3.0, which is part of MicroProfile 4.0. The major changes are the following: Pruned @Health qualifier Fixed HealthCheckResponse deserialization issue This component upgrade also covers the upgrade of smallrye-health 3.0.0 that implements MicroProfile Health 3.0. For more information, see Release Notes for MicroProfile Health 3.0 . Support for MicroProfile OpenTracing 2.0 JBoss EAP supports MicroProfile OpenTracing 2.0, which is part of MicroProfile 4.0. The new release removes the following APIs: Scope = ScopeManager.active() Scope = ScopeManager.activate (Span, boolean) Span = Scope.span() Scope = SpanBuilder.startActive() Span = Tracer.startManual() AutoFinishScopeManager For more information, see Release 2.0 . Support for MicroProfile Fault Tolerance 3.0 JBoss EAP supports MicroProfile Fault Tolerance 3.0, which is part of MicroProfile 4.0. The new release has the following breaking changes: Metric names and scopes changed. 
MicroProfile Metrics 2.0 added metric tags, and as a result, some information, previously included in the metric name, is now included in tags. Life cycle of circuit breakers and bulkheads is specified. The circuit breakers and bulkheads hold state between invocations, so their life cycle is important for correct functioning. For more information, see Release Notes for MicroProfile Fault Tolerance 3.0 . 1.3. Bootable JAR Ability to update the server configuration of a bootable JAR file at runtime You can now update the server configuration of a bootable JAR file at runtime using the --cli-script=<path to CLI script> argument. In the argument, <path to CLI script> means the path to a JBoss CLI script, a text file in Unicode Transformation Format 8-bit (UTF-8), to execute when starting the bootable JAR. This new functionality has the following caveats: If you perform any operation that requires a server restart, the bootable JAR server exits, which is the normal behavior of a bootable JAR restart. You cannot execute the following JBoss CLI commands at runtime: connect , reload , shutdown , jdbc-driver-info , and any command related to embedded server and patch . Ability to upgrade bootable JAR server components You can upgrade the following server components present in a bootable JAR when building the JAR file from the bootable JAR maven plugin: The JAR files for JBoss Modules module, such as undertow-core . EAP 7.4.x Galleon feature-pack , which is a dependency of the XP 3.0.x Galleon feature-pack . 1.4. Quickstarts OpenShift quickstarts Quickstarts released in JBoss EAP XP 1.0.0 to support OpenShift were Tech Preview. As of JBoss EAP XP 3.0.0, these quickstarts are fully supported. MicroProfile quickstarts for the bootable JAR JBoss EAP XP 3.0.0 provides MicroProfile quickstarts that you can use to understand the bootable JAR feature. Each quickstart provides a small, specific, working bootable JAR example. Use the quickstarts to run and test bootable JAR examples on your chosen platform. Note MicroProfile quickstarts cannot be used to build and test a hollow bootable JAR. Use the following MicroProfile quickstarts to test the bootable JAR on either a bare-metal platform or an OpenShift platform: MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile JWT MicroProfile Metrics MicroProfile OpenAPI MicroProfile OpenTracing MicroProfile REST Client Quickstart for MicroProfile Reactive Messaging 1.0 JBoss EAP XP 3.0.0 provides a new quickstart and guide for MicroProfile Reactive Messaging 1.0 that describes the basic functionalities. You can use in-memory streams and streams backed by the Apache Kafka platform. If you are using a bare metal system, you can use the Docker platform to access Apache Kafka functionalities. On OpenShift, you can access Apache Kafka functionalities using the AMQ Streams operator. 1.5. Technology preview features MicroProfile Reactive Messaging 1.0 for AMQ Streams integration JBoss EAP XP now supports MicroProfile Reactive Messaging 1.0. You can use the MicroProfile Reactive Messaging 1.0 APIs to interact with AMQ Streams 2021.Q2. That means, with JBoss EAP XP working as a message relayer, you can consume, process, and produce messages within your application. This technology preview functionality is available on OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_3.0.0_release_notes/new_features_and_enhancements |
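The release notes above describe the new --cli-script=<path to CLI script> argument for applying a JBoss CLI script when a bootable JAR starts. As a rough sketch only (the JAR and script file names below are placeholders, not taken from the release notes), the invocation looks like this:

    # Illustrative invocation; my-app-bootable.jar and tune-logging.cli are hypothetical names.
    # The script must be a UTF-8 text file containing JBoss CLI commands.
    java -jar my-app-bootable.jar --cli-script=tune-logging.cli

Keep in mind the restrictions listed above: connect, reload, shutdown, jdbc-driver-info, and the embedded-server and patch-related commands cannot be used in the script, and any operation that requires a restart causes the bootable JAR server to exit.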
Chapter 8. Configuring IP tunnels | Chapter 8. Configuring IP tunnels Similar to a VPN, an IP tunnel directly connects two networks over a third network, such as the internet. However, not all tunnel protocols support encryption. The routers in both networks that establish the tunnel require at least two interfaces: One interface that is connected to the local network One interface that is connected to the network through which the tunnel is established. To establish the tunnel, you create a virtual interface on both routers with an IP address from the remote subnet. NetworkManager supports the following IP tunnels: Generic Routing Encapsulation (GRE) Generic Routing Encapsulation over IPv6 (IP6GRE) Generic Routing Encapsulation Terminal Access Point (GRETAP) Generic Routing Encapsulation Terminal Access Point over IPv6 (IP6GRETAP) IPv4 over IPv4 (IPIP) IPv4 over IPv6 (IPIP6) IPv6 over IPv6 (IP6IP6) Simple Internet Transition (SIT) Depending on the type, these tunnels act either on layer 2 or 3 of the Open Systems Interconnection (OSI) model. 8.1. Configuring an IPIP tunnel to encapsulate IPv4 traffic in IPv4 packets An IP over IP (IPIP) tunnel operates on OSI layer 3 and encapsulates IPv4 traffic in IPv4 packets as described in RFC 2003 . Important Data sent through an IPIP tunnel is not encrypted. For security reasons, use the tunnel only for data that is already encrypted, for example, by other protocols, such as HTTPS. Note that IPIP tunnels support only unicast packets. If you require an IPv4 tunnel that supports multicast, see Configuring a GRE tunnel to encapsulate layer-3 traffic in IPv4 packets . For example, you can create an IPIP tunnel between two RHEL routers to connect two internal subnets over the internet as shown in the following diagram: Prerequisites Each RHEL router has a network interface that is connected to its local subnet. Each RHEL router has a network interface that is connected to the internet. The traffic you want to send through the tunnel is IPv4 unicast. Procedure On the RHEL router in network A: Create an IPIP tunnel interface named tun0 : The remote and local parameters set the public IP addresses of the remote and the local routers. Set the IPv4 address to the tun0 device: Note that a /30 subnet with two usable IP addresses is sufficient for the tunnel. Configure the tun0 connection to use a manual IPv4 configuration: Add a static route that routes traffic to the 172.16.0.0/24 network to the tunnel IP on router B: Enable the tun0 connection. Enable packet forwarding: On the RHEL router in network B: Create an IPIP tunnel interface named tun0 : The remote and local parameters set the public IP addresses of the remote and local routers. Set the IPv4 address to the tun0 device: Configure the tun0 connection to use a manual IPv4 configuration: Add a static route that routes traffic to the 192.0.2.0/24 network to the tunnel IP on router A: Enable the tun0 connection. Enable packet forwarding: Verification From each RHEL router, ping the IP address of the internal interface of the other router: On Router A, ping 172.16.0.1 : On Router B, ping 192.0.2.1 : 8.2. Configuring a GRE tunnel to encapsulate layer-3 traffic in IPv4 packets A Generic Routing Encapsulation (GRE) tunnel encapsulates layer-3 traffic in IPv4 packets as described in RFC 2784 . A GRE tunnel can encapsulate any layer 3 protocol with a valid Ethernet type. Important Data sent through a GRE tunnel is not encrypted.
For security reasons, use the tunnel only for data that is already encrypted, for example, by other protocols, such as HTTPS. For example, you can create a GRE tunnel between two RHEL routers to connect two internal subnets over the internet as shown in the following diagram: Prerequisites Each RHEL router has a network interface that is connected to its local subnet. Each RHEL router has a network interface that is connected to the internet. Procedure On the RHEL router in network A: Create a GRE tunnel interface named gre1 : The remote and local parameters set the public IP addresses of the remote and the local routers. Important The gre0 device name is reserved. Use gre1 or a different name for the device. Set the IPv4 address to the gre1 device: Note that a /30 subnet with two usable IP addresses is sufficient for the tunnel. Configure the gre1 connection to use a manual IPv4 configuration: Add a static route that routes traffic to the 172.16.0.0/24 network to the tunnel IP on router B: Enable the gre1 connection. Enable packet forwarding: On the RHEL router in network B: Create a GRE tunnel interface named gre1 : The remote and local parameters set the public IP addresses of the remote and the local routers. Set the IPv4 address to the gre1 device: Configure the gre1 connection to use a manual IPv4 configuration: Add a static route that routes traffic to the 192.0.2.0/24 network to the tunnel IP on router A: Enable the gre1 connection. Enable packet forwarding: Verification From each RHEL router, ping the IP address of the internal interface of the other router: On Router A, ping 172.16.0.1 : On Router B, ping 192.0.2.1 : 8.3. Configuring a GRETAP tunnel to transfer Ethernet frames over IPv4 A Generic Routing Encapsulation Terminal Access Point (GRETAP) tunnel operates on OSI level 2 and encapsulates Ethernet traffic in IPv4 packets as described in RFC 2784 . Important Data sent through a GRETAP tunnel is not encrypted. For security reasons, establish the tunnel over a VPN or a different encrypted connection. For example, you can create a GRETAP tunnel between two RHEL routers to connect two networks using a bridge as shown in the following diagram: Prerequisites Each RHEL router has a network interface that is connected to its local network, and the interface has no IP configuration assigned. Each RHEL router has a network interface that is connected to the internet. Procedure On the RHEL router in network A: Create a bridge interface named bridge0 : Configure the IP settings of the bridge: Add a new connection profile for the interface that is connected to local network to the bridge: Add a new connection profile for the GRETAP tunnel interface to the bridge: The remote and local parameters set the public IP addresses of the remote and the local routers. Important The gretap0 device name is reserved. Use gretap1 or a different name for the device. Optional: Disable the Spanning Tree Protocol (STP) if you do not need it: By default, STP is enabled and causes a delay before you can use the connection. 
Configure that activating the bridge0 connection automatically activates the ports of the bridge: Activate the bridge0 connection: On the RHEL router in network B: Create a bridge interface named bridge0 : Configure the IP settings of the bridge: Add a new connection profile for the interface that is connected to local network to the bridge: Add a new connection profile for the GRETAP tunnel interface to the bridge: The remote and local parameters set the public IP addresses of the remote and the local routers. Optional: Disable the Spanning Tree Protocol (STP) if you do not need it: Configure that activating the bridge0 connection automatically activates the ports of the bridge: Activate the bridge0 connection: Verification On both routers, verify that the enp1s0 and gretap1 connections are connected and that the CONNECTION column displays the connection name of the port: From each RHEL router, ping the IP address of the internal interface of the other router: On Router A, ping 192.0.2.2 : On Router B, ping 192.0.2.1 : | [
"nmcli connection add type ip-tunnel ip-tunnel.mode ipip con-name tun0 ifname tun0 remote 198.51.100.5 local 203.0.113.10",
"nmcli connection modify tun0 ipv4.addresses '10.0.1.1/30'",
"nmcli connection modify tun0 ipv4.method manual",
"nmcli connection modify tun0 +ipv4.routes \"172.16.0.0/24 10.0.1.2\"",
"nmcli connection up tun0",
"echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf",
"nmcli connection add type ip-tunnel ip-tunnel.mode ipip con-name tun0 ifname tun0 remote 203.0.113.10 local 198.51.100.5",
"nmcli connection modify tun0 ipv4.addresses '10.0.1.2/30'",
"nmcli connection modify tun0 ipv4.method manual",
"nmcli connection modify tun0 +ipv4.routes \"192.0.2.0/24 10.0.1.1\"",
"nmcli connection up tun0",
"echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf",
"ping 172.16.0.1",
"ping 192.0.2.1",
"nmcli connection add type ip-tunnel ip-tunnel.mode gre con-name gre1 ifname gre1 remote 198.51.100.5 local 203.0.113.10",
"nmcli connection modify gre1 ipv4.addresses '10.0.1.1/30'",
"nmcli connection modify gre1 ipv4.method manual",
"nmcli connection modify gre1 +ipv4.routes \"172.16.0.0/24 10.0.1.2\"",
"nmcli connection up gre1",
"echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf",
"nmcli connection add type ip-tunnel ip-tunnel.mode gre con-name gre1 ifname gre1 remote 203.0.113.10 local 198.51.100.5",
"nmcli connection modify gre1 ipv4.addresses '10.0.1.2/30'",
"nmcli connection modify gre1 ipv4.method manual",
"nmcli connection modify gre1 +ipv4.routes \"192.0.2.0/24 10.0.1.1\"",
"nmcli connection up gre1",
"echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf",
"ping 172.16.0.1",
"ping 192.0.2.1",
"nmcli connection add type bridge con-name bridge0 ifname bridge0",
"nmcli connection modify bridge0 ipv4.addresses '192.0.2.1/24' nmcli connection modify bridge0 ipv4.method manual",
"nmcli connection add type ethernet slave-type bridge con-name bridge0-port1 ifname enp1s0 master bridge0",
"nmcli connection add type ip-tunnel ip-tunnel.mode gretap slave-type bridge con-name bridge0-port2 ifname gretap1 remote 198.51.100.5 local 203.0.113.10 master bridge0",
"nmcli connection modify bridge0 bridge.stp no",
"nmcli connection modify bridge0 connection.autoconnect-slaves 1",
"nmcli connection up bridge0",
"nmcli connection add type bridge con-name bridge0 ifname bridge0",
"nmcli connection modify bridge0 ipv4.addresses '192.0.2.2/24' nmcli connection modify bridge0 ipv4.method manual",
"nmcli connection add type ethernet slave-type bridge con-name bridge0-port1 ifname enp1s0 master bridge0",
"nmcli connection add type ip-tunnel ip-tunnel.mode gretap slave-type bridge con-name bridge0-port2 ifname gretap1 remote 203.0.113.10 local 198.51.100.5 master bridge0",
"nmcli connection modify bridge0 bridge.stp no",
"nmcli connection modify bridge0 connection.autoconnect-slaves 1",
"nmcli connection up bridge0",
"nmcli device nmcli device DEVICE TYPE STATE CONNECTION bridge0 bridge connected bridge0 enp1s0 ethernet connected bridge0-port1 gretap1 iptunnel connected bridge0-port2",
"ping 192.0.2.2",
"ping 192.0.2.1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-ip-tunnels_configuring-and-managing-networking |
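The nmcli commands above create and enable the tunnel connections; if you want to confirm what ended up on the device, the standard iproute2 and NetworkManager inspection commands are one way to do it. A minimal sketch, using the interface names from the IPIP example above:

    # Show tunnel details (mode, local and remote endpoints) for the tun0 device
    ip -d link show tun0
    # Show the NetworkManager profile, including ipv4.addresses and ipv4.routes
    nmcli connection show tun0

The same checks apply to the gre1 and gretap1 interfaces from the later procedures.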
13.3. Setting up Specific Jobs | 13.3. Setting up Specific Jobs Automated jobs can be configured through the Certificate Manager Console or by editing the configuration file directory. It is recommended that these changes be made through the Certificate Manager Console. 13.3.1. Configuring Specific Jobs Using the Certificate Manager Console Note pkiconsole is being deprecated. To enable and configure an automated job using the Certificate Manager Console: Open the Certificate Manager Console. Confirm that the Jobs Scheduler is enabled. See Section 13.2, "Setting up the Job Scheduler" for more information. In the Configuration tab, select Job Scheduler from the navigation tree. Then select Jobs to open the Job Instance tab. Select the job instance from the list, and click Edit/View . The Job Instance Editor opens, showing the current job configuration. Figure 13.1. Job Configuration Select enabled to turn on the job. Set the configuration settings by specifying them in the fields for this dialog. For certRenewalNotifier , see Section 13.3.3, "Configuration Parameters of certRenewalNotifier" . For requestInQueueNotifier , see Section 13.3.4, "Configuration Parameters of requestInQueueNotifier" . For publishCerts , see Section 13.3.5, "Configuration Parameters of publishCerts" . For unpublishExpiredCerts , see Section 13.3.6, "Configuration Parameters of unpublishExpiredCerts" . For more information about setting the cron time frequencies, see Section 13.3.7, "Frequency Settings for Automated Jobs" . Click OK . Click Refresh to view any changes in the main window. If the job is configured to send automatic messages, check that a mail server is set up correctly. See Section 12.4, "Configuring a Mail Server for Certificate System Notifications" . Customize the email message text and appearance. 13.3.2. Configuring Jobs by Editing the Configuration File Ensure that the Jobs Scheduler is enabled and configured; see Section 13.2, "Setting up the Job Scheduler" . Stop the CA subsystem instance. Open the CS.cfg file for that server instance in a text editor. Edit all of the configuration parameters for the job module being configured. To configure the certRenewalNotifier job, edit all parameters that begin with jobsScheduler.job.certRenewalNotifier ; see Section 13.3.3, "Configuration Parameters of certRenewalNotifier" . To configure the requestInQueueNotifier job, edit all parameters that begin with jobsScheduler.job.requestInQueueNotifier ; see Section 13.3.4, "Configuration Parameters of requestInQueueNotifier" . To configure the publishCerts job, edit all parameters that begin with jobsScheduler.job.publishCerts ; see Section 13.3.5, "Configuration Parameters of publishCerts" . To configure the unpublishExpiredCerts job, edit all parameters that begin with jobsScheduler.job.unpublishExpiredCerts ; see Section 13.3.6, "Configuration Parameters of unpublishExpiredCerts" . Save the file. Restart the server instance. If the job will send automated messages, check that the mail server is set up correctly. See Section 12.4, "Configuring a Mail Server for Certificate System Notifications" . Customize the automatic job messages. 13.3.3. Configuration Parameters of certRenewalNotifier Table 13.1, "certRenewalNotifier Parameters" gives details for each of these parameters that can be configured for the certRenewalNotifier job, either in the CS.cfg file or in the Certificate Manager Console. Table 13.1. 
certRenewalNotifier Parameters Parameter Description enabled Specifies whether the job is enabled or disabled. The value true enables the job; false disables it. cron Sets the schedule when this job should be run. This sets the time at which the Job Scheduler daemon thread checks the certificates for sending renewal notifications. These settings must follow the conventions in Section 13.3.7, "Frequency Settings for Automated Jobs" . For example: The job in the example is run Monday through Friday at 3:00 a.m. notifyTriggerOffset Sets how long (in days) before the certificate expiration date the first notification will be sent. notifyEndOffset Sets how long (in days) after the certificate expires that notifications will continue to be sent if the certificate is not replaced. senderEmail Sets the sender of the notification messages, who will be notified of any delivery problems. emailSubject Sets the text of the subject line of the notification message. emailTemplate Sets the path, including the filename, to the directory that contains the template to use to create the message content. summary.enabled Sets whether a summary report of renewal notifications should be compiled and sent. The value true enables sending the summary; false disables it. If enabled, set the remaining summary parameters; these are required by the server to send the summary report. summary.recipientEmail Specifies the recipients of the summary message. These can be agents who need to know the status of user certificates or other users. Set more than one recipient by separating each email address with a comma. summary.senderEmail Specifies the email address of the sender of the summary message. summary.emailSubject Gives the subject line of the summary message. summary.itemTemplate Gives the path, including the filename, to the directory that contains the template to use to create the content and format of each item to be collected for the summary report. summary.emailTemplate Gives the path, including the filename, to the directory that contains the template to use to create the summary report email notification. 13.3.4. Configuration Parameters of requestInQueueNotifier Table 13.2, "requestInQueueNotifier Parameters" gives details for each of these parameters that can be configured for the requestInQueueNotifier job, either in the CS.cfg file or in the Certificate Manager Console. Table 13.2. requestInQueueNotifier Parameters Parameter Description enabled Sets whether the job is enabled ( true ) or disabled ( false ). cron Sets the time schedule for when the job should run. This is the time at which the Job Scheduler daemon thread checks the queue for pending requests. This setting must follow the conventions in Section 13.3.7, "Frequency Settings for Automated Jobs" . For example: subsystemid Specifies the subsystem which is running the job. The only possible value is ca , for the Certificate Manager. summary.enabled Specifies whether a summary of the job accomplished should be compiled and sent. The value true enables the summary reports; false disables them. If enabled, set the remaining summary parameters; these are required by the server to send the summary report. summary.emailSubject Sets the subject line of the summary message. summary.emailTemplate Specifies the path, including the filename, to the directory containing the template to use to create the summary report. summary.senderEmail Specifies the sender of the notification message, who will be notified of any delivery problems.
summary.recipientEmail Specifies the recipients of the summary message. These can be agents who need to process pending requests or other users. More than one recipient can be listed by separating each email address with a comma. 13.3.5. Configuration Parameters of publishCerts Table 13.3, "publishCerts Parameters" gives details for each of these parameters that can be configured for the publishCerts job, either in the CS.cfg file or in the Certificate Manager Console. Table 13.3. publishCerts Parameters Parameter Description enabled Sets whether the job is enabled. The value true is enabled; false is disabled. cron Sets the time schedule for when the job runs. This is the time the Job Scheduler daemon thread checks the certificates to removing expired certificates from the publishing directory. This setting must follow the conventions in Section 13.3.7, "Frequency Settings for Automated Jobs" . For example: summary.enabled Specifies whether a summary of the certificates published by the job should be compiled and sent. The value true enables the summaries; false disables them. If enabled, set the remaining summary parameters; these are required by the server to send the summary report. summary.emailSubject Gives the subject line of the summary message. summary.emailTemplate Specifies the path, including the filename, to the directory containing the template to use to create the summary report. summary.itemTemplate Specifies the path, including the filename, to the directory containing the template to use to create the content and format of each item collected for the summary report. summary.senderEmail Specifies the sender of the summary message, who will be notified of any delivery problems. summary.recipientEmail Specifies the recipients of the summary message. These can be agents who need to know the status of user certificates or other users. More than one recipient can be set by separating each email address with a comma. 13.3.6. Configuration Parameters of unpublishExpiredCerts Table 13.4, "unpublishExpiredCerts Parameters" gives details for each of these parameters that can be configured for the unpublishedExpiresCerts job, either in the CS.cfg file or in the Certificate Manager Console. Table 13.4. unpublishExpiredCerts Parameters Parameter Description enabled Sets whether the job is enabled. The value true is enabled; false is disabled. cron Sets the time schedule for when the job runs. This is the time the Job Scheduler daemon thread checks the certificates to removing expired certificates from the publishing directory. This setting must follow the conventions in Section 13.3.7, "Frequency Settings for Automated Jobs" . For example: summary.enabled Specifies whether a summary of the certificates published by the job should be compiled and sent. The value true enables the summaries; false disables them. If enabled, set the remaining summary parameters; these are required by the server to send the summary report. summary.emailSubject Gives the subject line of the summary message. summary.emailTemplate Specifies the path, including the filename, to the directory containing the template to use to create the summary report. summary.itemTemplate Specifies the path, including the filename, to the directory containing the template to use to create the content and format of each item collected for the summary report. summary.senderEmail Specifies the sender of the summary message, who will be notified of any delivery problems. 
summary.recipientEmail Specifies the recipients of the summary message. These can be agents who need to know the status of user certificates or other users. More than one recipient can be set by separating each email address with a comma. 13.3.7. Frequency Settings for Automated Jobs The Job Scheduler uses a variation of the Unix crontab entry format to specify dates and times for checking the job queue and executing jobs. As shown in Table 13.5, "Time Values for Scheduling Jobs" and Figure 13.1, "Job Configuration" , the time entry format consists of five fields. (The sixth field specified for the Unix crontab is not used by the Job Scheduler.) Values are separated by spaces or tabs. Each field can contain either a single integer or a pair of integers separated by a hyphen ( - ) to indicate an inclusive range. To specify all legal values, a field can contain an asterisk rather than an integer. Day fields can contain a comma-separated list of values. The syntax of this expression is Table 13.5. Time Values for Scheduling Jobs Field Value Minute 0-59 Hour 0-23 Day of month 1-31 Month of year 1-12 Day of week 0-6 (where 0=Sunday) For example, the following time entry specifies every hour at 15 minutes (1:15, 2:15, 3:15, and so on): The following example sets a job to run at noon on April 12: The day-of-month and day-of-week options can contain a comma-separated list of values to specify more than one day. If both day fields are specified, the specification is inclusive; that is, the day of the month is not required to fall on the day of the week to be valid. For example, the following entry specifies a job execution time of midnight on the first and fifteenth of every month and on every Monday: To specify one day type without the other, use an asterisk in the other day field. For example, the following entry runs the job at 3:15 a.m. every weekday morning: | [
"pkiconsole https://server.example.com:8443/ca",
"pki-server stop instance_name",
"pki-server start instance_name",
"0 3 * * 1-5",
"0 0 * * 0",
"0 0 * * 6",
"0 0 * * 6",
"Minute Hour Day_of_month Month_of_year Day_of_week",
"15 * * * *",
"0 12 12 4 *",
"0 0 1,15 * 1",
"15 3 * * 1-5"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Setting_up_Specific_Jobs |
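When the jobs are configured by editing CS.cfg directly, each setting described above becomes a key under the job's prefix. The following excerpt is only an illustration of the naming pattern (the offset values are made-up examples, not defaults):

    # Hypothetical CS.cfg excerpt for the certRenewalNotifier job
    jobsScheduler.job.certRenewalNotifier.enabled=true
    jobsScheduler.job.certRenewalNotifier.cron=0 3 * * 1-5
    jobsScheduler.job.certRenewalNotifier.notifyTriggerOffset=30
    jobsScheduler.job.certRenewalNotifier.notifyEndOffset=30

The cron value here is the same weekday 3:00 a.m. schedule used in the chapter's first example.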
Chapter 14. Access Control Lists | Chapter 14. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users of the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented. The Red Hat Enterprise Linux 4 kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories. 14.1. Mounting File Systems Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: For example: Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share. 14.1.1. NFS By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it with the no_acl option via the command line or the /etc/fstab file. | [
"mount -t ext3 -o acl <device-name> <partition>",
"mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work",
"LABEL=/work /work ext3 acl 1 2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/access_control_lists |
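The NFS section above mentions the no_acl option on both sides; a minimal sketch of what that looks like, assuming a hypothetical /work export and a server named nfs-server, is:

    # Server side (/etc/exports): export the directory with ACLs disabled
    /work *(rw,no_acl)
    # Client side: mount the share with ACL support turned off
    mount -t nfs -o no_acl nfs-server:/work /mnt/work

The rw option and all host and path names are placeholders; only no_acl is the option the chapter describes.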
17.3. Trouble During the Installation | 17.3. Trouble During the Installation 17.3.1. The " No devices found to install Red Hat Enterprise Linux " Error Message If you receive an error message stating No devices found to install Red Hat Enterprise Linux , there is probably a SCSI controller that is not being recognized by the installation program. Check your hardware vendor's website to determine if a driver disk image is available that fixes your problem. For more general information on driver disks, refer to Chapter 13, Updating Drivers During Installation on IBM Power Systems Servers . You can also refer to the Red Hat Hardware Compatibility List , available online at: https://hardware.redhat.com/ | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-trouble-install-ppc |
10.5. JSON Representation of a Cluster | 10.5. JSON Representation of a Cluster Example 10.2. A JSON representation of a cluster | [
"{ \"cluster\" : [ { \"cpu\" : { \"architecture\" : \"X86_64\", \"id\" : \"Intel Penryn Family\" }, \"data_center\" : { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255\", \"id\" : \"00000002-0002-0002-0002-000000000255\" }, \"memory_policy\" : { \"overcommit\" : { \"percent\" : \"100\" }, \"transparent_hugepages\" : { \"enabled\" : \"true\" } }, \"scheduling_policy\" : { \"policy\" : \"none\", \"name\" : \"none\", \"href\" : \"/ovirt-engine/api/schedulingpolicies/b4ed2332-a7ac-4d5f-9596-99a439cb2812\", \"id\" : \"b4ed2332-a7ac-4d5f-9596-99a439cb2812\" }, \"version\" : { \"major\" : \"4\", \"minor\" : \"0\" }, \"error_handling\" : { \"on_error\" : \"migrate\" }, \"virt_service\" : \"true\", \"gluster_service\" : \"false\", \"threads_as_cores\" : \"false\", \"tunnel_migration\" : \"false\", \"trusted_service\" : \"false\", \"ha_reservation\" : \"false\", \"optional_reason\" : \"false\", \"ballooning_enabled\" : \"false\", \"ksm\" : { \"enabled\" : \"true\" }, \"required_rng_sources\" : { }, \"name\" : \"Default\", \"description\" : \"The default server cluster\", \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb\", \"id\" : \"00000001-0001-0001-0001-0000000002fb\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb/networks\", \"rel\" : \"networks\" }, { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb/glustervolumes\", \"rel\" : \"glustervolumes\" }, { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb/glusterhooks\", \"rel\" : \"glusterhooks\" }, { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb/affinitygroups\", \"rel\" : \"affinitygroups\" }, { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb/cpuprofiles\", \"rel\" : \"cpuprofiles\" } ] } ] }"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/json_representation_of_a_cluster |
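Since the href attributes in the JSON above give the resource path, fetching this representation is a single GET against the clusters collection. A hedged example (host name and credentials are placeholders):

    # Placeholder host and credentials; the path comes from the cluster's href above
    curl -k -u admin@internal:password -H "Accept: application/json" \
      https://rhvm.example.com/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb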
Chapter 10. ImageTag [image.openshift.io/v1] | Chapter 10. ImageTag [image.openshift.io/v1] Description ImageTag represents a single tag within an image stream and includes the spec, the status history, and the currently referenced image (if any) of the provided tag. This type replaces the ImageStreamTag by providing a full view of the tag. ImageTags are returned for every spec or status tag present on the image stream. If no tag exists in either form a not found error will be returned by the API. A create operation will succeed if no spec tag has already been defined and the spec field is set. Delete will remove both spec and status elements from the image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec status image 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. status object NamedTagEventList relates a tag to its image history. 10.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 10.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 10.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 10.1.4. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. 
Type array 10.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 10.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 10.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 10.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 10.1.9. 
.image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 10.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 10.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 10.1.12. .spec Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 10.1.13. .spec.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. 
insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 10.1.14. .spec.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 10.1.15. .status Description NamedTagEventList relates a tag to its image history. Type object Required tag items Property Type Description conditions array Conditions is an array of conditions that apply to the tag event list. conditions[] object TagEventCondition contains condition information for a tag event. items array Standard object's metadata. items[] object TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. tag string Tag is the tag for which the history is recorded 10.1.16. .status.conditions Description Conditions is an array of conditions that apply to the tag event list. Type array 10.1.17. .status.conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 10.1.18. .status.items Description Standard object's metadata. Type array 10.1.19. .status.items[] Description TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. Type object Required created dockerImageReference image generation Property Type Description created Time Created holds the time the TagEvent was created dockerImageReference string DockerImageReference is the string that can be used to pull this image generation integer Generation is the spec tag generation that resulted in this tag being updated image string Image is the image 10.2. 
API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagetags GET : list objects of kind ImageTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags GET : list objects of kind ImageTag POST : create an ImageTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags/{name} DELETE : delete an ImageTag GET : read the specified ImageTag PATCH : partially update the specified ImageTag PUT : replace the specified ImageTag 10.2.1. /apis/image.openshift.io/v1/imagetags Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ImageTag Table 10.2. HTTP responses HTTP code Reponse body 200 - OK ImageTagList schema 401 - Unauthorized Empty 10.2.2. /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags Table 10.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind ImageTag Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK ImageTagList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageTag Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.8. Body parameters Parameter Type Description body ImageTag schema Table 10.9. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 202 - Accepted ImageTag schema 401 - Unauthorized Empty 10.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags/{name} Table 10.10. Global path parameters Parameter Type Description name string name of the ImageTag namespace string object name and auth scope, such as for teams and projects Table 10.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageTag Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.13. Body parameters Parameter Type Description body DeleteOptions schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageTag Table 10.15. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageTag Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.17. Body parameters Parameter Type Description body Patch schema Table 10.18. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageTag Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body ImageTag schema Table 10.21. HTTP responses HTTP code Response body 200 - OK ImageTag schema 201 - Created ImageTag schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/image_apis/imagetag-image-openshift-io-v1 |
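As a minimal sketch of how a client might exercise the list and read endpoints documented above, the following commands assume access to a running OpenShift cluster; <namespace>, <name>, and <api_server> are placeholders rather than values taken from the reference.

$ oc get --raw "/apis/image.openshift.io/v1/namespaces/<namespace>/imagetags?limit=50"
$ oc get --raw "/apis/image.openshift.io/v1/namespaces/<namespace>/imagetags/<name>"
$ curl -k -H "Authorization: Bearer $(oc whoami -t)" "https://<api_server>:6443/apis/image.openshift.io/v1/namespaces/<namespace>/imagetags?labelSelector=app%3Ddemo"

The first two commands reuse the authenticated oc session to issue plain GET requests against the paths listed above; the curl variant shows the same list call with the labelSelector query parameter applied.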
probe::scheduler.signal_send | probe::scheduler.signal_send Name probe::scheduler.signal_send - Sending a signal Synopsis scheduler.signal_send Values pid: pid of the process sending signal; name: name of the probe point; signal_number: signal number | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scheduler-signal-send |
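As a minimal sketch of how this probe point might be used, the following one-line SystemTap invocation prints the documented values each time the probe fires; it assumes the systemtap package and matching kernel debug information are installed on the host.

$ stap -e 'probe scheduler.signal_send { printf("%s: pid %d sending signal %d\n", name, pid, signal_number) }'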
Configuring the Compute service for instance creation | Configuring the Compute service for instance creation Red Hat OpenStack Platform 17.1 Configuring and managing the Red Hat OpenStack Platform Compute service (nova) for creating instances OpenStack Documentation Team [email protected] | [
"parameter_defaults: NovaNfsEnabled: true NovaNfsOptions: \"context=system_u:object_r:nfs_t:s0\" NovaNfsShare: \"192.0.2.254:/export/nova\" NovaNfsVersion: \"4.2\" NovaSchedulerEnabledFilters: - AggregateInstanceExtraSpecsFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter",
"parameter_defaults: ComputeExtraConfig: nova::compute::force_raw_images: True",
"parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/timeout_nbd: value: '20'",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_vcpus> [--private --project <project_id>] <flavor_name>",
"(overcloud)USD openstack flavor set --property <key=value> --property <key=value> ... <flavor_name>",
"(overcloud)USD openstack flavor set --property hw:cpu_sockets=2 --property hw:cpu_cores=2 processor_topology_flavor",
"openstack flavor set cpu_limits_flavor --property quota:cpu_quota=10000 --property quota:cpu_period=20000",
"--property trait:HW_CPU_HYPERTHREADING=forbidden",
"--property trait:HW_CPU_HYPERTHREADING=required",
"openstack flavor set numa_top_flavor --property hw:numa_nodes=2 --property hw:numa_cpus.0=0,1,2,3,4,5 --property hw:numa_cpus.1=6,7 --property hw:numa_mem.0=3072 --property hw:numa_mem.1=1024",
"openstack flavor set <flavor> --property hw:cpu_realtime=\"yes\" --property hw:cpu_realtime_mask=^0-1",
"--property hw:cpu_policy=dedicated",
"<alias>:<count>",
"openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required avx512-flavor",
"openstack flavor set --property resources:CUSTOM_BAREMETAL_SMALL=1 --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-small",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_cpu_pinning.yaml Compute:ComputeCPUPinning Compute Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.CPU-PINNING <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: ComputeCPUPinning count: 1 defaults: resource_class: baremetal.CPU-PINNING network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1",
"(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeCPUPinningNetworkConfigTemplate: /home/stack/templates/nic-configs/<cpu_pinning_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter",
"parameter_defaults: ComputeCPUPinningParameters: NovaComputeCpuDedicatedSet: 1,3,5,7",
"parameter_defaults: ComputeCPUPinningParameters: NovaComputeCpuSharedSet: 2,6",
"parameter_defaults: ComputeCPUPinningParameters: NovaReservedHugePages: <ram>",
"parameter_defaults: ComputeCPUPinningParameters: IsolCpusList: 1-3,5-7",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_cpu_pinning.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/cpu_pinning.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml",
"(undercloud)USD source ~/overcloudrc",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> pinned_cpus",
"(overcloud)USD openstack flavor set --property hw:cpu_policy=dedicated pinned_cpus",
"(overcloud)USD openstack flavor set --property hw:mem_page_size=<page_size> pinned_cpus",
"(overcloud)USD openstack flavor set --property hw:cpu_thread_policy=require pinned_cpus",
"(overcloud)USD openstack server create --flavor pinned_cpus --image <image> pinned_cpu_instance",
"(undercloud)USD source ~/overcloudrc",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus",
"(overcloud)USD openstack flavor set --property hw:cpu_policy=shared floating_cpus",
"(overcloud)USD openstack flavor set --property hw:mem_page_size=<page_size> pinned_cpus",
"(undercloud)USD source ~/overcloudrc",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <number_of_reserved_vcpus> --property hw:cpu_policy=mixed mixed_CPUs_flavor",
"(overcloud)USD openstack flavor set --property hw:cpu_dedicated_mask=<CPU_number> mixed_CPUs_flavor",
"(overcloud)USD openstack flavor set --property hw:mem_page_size=<page_size> pinned_cpus",
"grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u",
"/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2 /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1,3",
"parameter_defaults: NovaComputeCpuDedicatedSet: 2-15,18-31",
"parameter_defaults: NovaComputeCpuSharedSet: 0,1,16,17",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud)USD openstack flavor set --property hw:cpu_policy=dedicated --property hw:emulator_threads_policy=share dedicated_emulator_threads",
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: <cpu_mode>",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: <cpu_model>",
"NovaLibvirtCPUModels: - SandyBridge - IvyBridge - Haswell-noTSX-IBRS",
"sudo podman exec -it nova_libvirt virsh cpu-models <arch>",
"sudo podman exec -it nova_virtqemud virsh cpu-models <arch>",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUModelExtraFlags: <cpu_feature_flags>",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: - IvyBridge - Cascadelake-Server NovaLibvirtCPUModelExtraFlags: 'pcid,+ssbd,-mtrr'",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"cp /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml /home/stack/templates/enable-swap.yaml",
"parameter_defaults: swap_size_megabytes: <swap size in MB> swap_path: <full path to location of swap, default: /swap>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/enable-swap.yaml",
"NovaReservedHostMemory = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourcen * resource_ram))",
"parameter_defaults: ComputeParameters: NovaReservedHugePages: [\"node:0,size:1GB,count:1\",\"node:1,size:1GB,count:1\"]",
"parameter_defaults: ComputeParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32\"",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'Haswell-noTSX' NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb'",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'Haswell-noTSX' NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb, +pcid'",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> huge_pages",
"openstack flavor set huge_pages --property hw:mem_page_size=<page_size>",
"openstack server create --flavor huge_pages --image <image> huge_pages_instance",
"heat_template_version: <version> description: > Huge pages configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: hugepages_config} hugepages_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash hostname | grep -qiE 'co?mp' || exit 0 systemctl mask dev-hugepages.mount || true for pagesize in 2M 1G;do if ! [ -d \"/dev/hugepagesUSD{pagesize}\" ]; then mkdir -p \"/dev/hugepagesUSD{pagesize}\" cat << EOF > /etc/systemd/system/dev-hugepagesUSD{pagesize}.mount [Unit] Description=USD{pagesize} Huge Pages File System Documentation=https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems DefaultDependencies=no Before=sysinit.target ConditionPathExists=/sys/kernel/mm/hugepages ConditionCapability=CAP_SYS_ADMIN ConditionVirtualization=!private-users [Mount] What=hugetlbfs Where=/dev/hugepagesUSD{pagesize} Type=hugetlbfs Options=pagesize=USD{pagesize} [Install] WantedBy = sysinit.target EOF fi done systemctl daemon-reload for pagesize in 2M 1G;do systemctl enable --now dev-hugepagesUSD{pagesize}.mount done outputs: OS::stack_id: value: {get_resource: userdata}",
"parameter_defaults NovaComputeOptVolumes: - /opt/dev:/opt/dev NovaLibvirtOptVolumes: - /opt/dev:/opt/dev",
"resource_registry: OS::TripleO::NodeUserData: ./hugepages.yaml",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/firstboot.yaml",
"parameter_defaults: NovaLibvirtFileBackedMemory: 102400",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"mkfs.ext4 /dev/sdb",
"mount /dev/sdb /var/lib/libvirt/qemu/ram",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_amd_sev.yaml Compute:ComputeAMDSEV Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.AMD-SEV <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: ComputeAMDSEV count: 1 defaults: resource_class: baremetal.AMD-SEV network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1",
"(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeAMDSEVNetworkConfigTemplate: /home/stack/templates/nic-configs/<amd_sev_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"lscpu | grep sev",
"parameter_defaults: ComputeAMDSEVExtraConfig: nova::config::nova_config: libvirt/num_memory_encrypted_guests: value: 15",
"parameter_defaults: ComputeAMDSEVParameters: NovaHWMachineType: x86_64=q35",
"parameter_defaults: ComputeAMDSEVParameters: NovaReservedHostMemory: <libvirt/num_memory_encrypted_guests * 16>",
"parameter_defaults: ComputeAMDSEVParameters: KernelArgs: \"hugepagesz=1GB hugepages=32 default_hugepagesz=1GB mem_encrypt=on kvm_amd.sev=1\"",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_amd_sev.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/<compute_environment_file>.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml",
"(overcloud)USD openstack image create ... --property hw_firmware_type=uefi amd-sev-image",
"(overcloud)USD openstack image set --property hw_mem_encryption=True amd-sev-image",
"(overcloud)USD openstack image set --property hw_machine_type=q35 amd-sev-image",
"(overcloud)USD openstack image set --property trait:HW_CPU_X86_AMD_SEV=required amd-sev-image",
"(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 --property hw:mem_encryption=True m1.small-amd-sev",
"(overcloud)USD openstack flavor set --property trait:HW_CPU_X86_AMD_SEV=required m1.small-amd-sev",
"(overcloud)USD openstack server create --flavor m1.small-amd-sev --image amd-sev-image amd-sev-instance",
"dmesg | grep -i sev AMD Secure Encrypted Virtualization (SEV) active",
"source ~/stackrc",
"parameter_defaults: NovaMaxDiskDevicesToAttach: <max_device_limit>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<environment_file>.yaml",
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: NovaNfsEnabled: True NovaNfsShare: <nfs_share>",
"parameter_defaults: NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>'",
"man 8 mount.",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/nfs_instance_disk_backend.yaml",
"parameter_defaults: ComputeParameters: NovaGlanceEnableRbdDownload: True NovaEnableRbdBackend: False",
"parameter_defaults: ComputeParameters: NovaGlanceEnableRbdDownload: True NovaEnableRbdBackend: False NovaGlanceRbdDownloadMultistoreID: <rbd_backend_id>",
"parameter_defaults: ComputeExtraConfig: nova::config::nova_config: glance/rbd_user: value: 'glance' glance/rbd_pool: value: 'images' glance/rbd_ceph_conf: value: '/etc/ceph/ceph.conf' glance/rbd_connect_timeout: value: '5'",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud)USD openstack image create ... trait-image",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait list",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME",
"(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')",
"(overcloud)USD echo USDexisting_traits",
"(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>",
"(overcloud)USD openstack image set --property trait:HW_CPU_X86_AVX512BW=required trait-image",
"(overcloud)USD openstack image set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-image",
"(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 trait-flavor",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait list",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME",
"(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')",
"(overcloud)USD echo USDexisting_traits",
"(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>",
"(overcloud)USD openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required trait-flavor",
"(overcloud)USD openstack flavor set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-flavor",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait list",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME",
"(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')",
"(overcloud)USD echo USDexisting_traits",
"(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>",
"(overcloud)USD openstack --os-compute-api-version 2.53 aggregate set --property trait:<TRAIT_NAME>=required <aggregate_name>",
"(overcloud)USD openstack flavor set --property trait:<TRAIT_NAME>=required <flavor> (overcloud)USD openstack image set --property trait:<TRAIT_NAME>=required <image>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"parameter_defaults: NovaSchedulerEnabledFilters: - AggregateInstanceExtraSpecsFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter",
"parameter_defaults: ControllerExtraConfig: nova::scheduler::filter::ram_weight_multiplier: '2.0'",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1",
"parameter_defaults: ComputeExtraConfig: nova::config::nova_config: filter_scheduler/isolated_hosts: value: server1, server2 filter_scheduler/isolated_images: value: 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09",
"parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/compute_monitors: value: 'cpu.virt_driver'",
"openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1",
"openstack server group create --policy affinity <group_name>",
"openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>",
"openstack server group create --policy anti-affinity <group_name>",
"openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>",
"openstack server create --image <image> --flavor <flavor> --hint build_near_host_ip=<ip_address> --hint cidr=<subnet_mask> <instance_name>",
"(node_resource_availability - minval) / (maxval - minval)",
"(w1_multiplier * norm(w1)) + (w2_multiplier * norm(w2)) +",
"ControllerExtraConfig: nova::scheduler::filter::scheduler_weight_classes: 'nova.scheduler.weights.ram.RAMWeigher' nova::scheduler::filter::ram_weight_multiplier: '2.0'",
"openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>",
"openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>",
"meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid>",
"meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid> inventories: additional: - CUSTOM_EXAMPLE_RESOURCE_CLASS: total: <total_available> reserved: <reserved> min_unit: <min_unit> max_unit: <max_unit> step_size: <step_size> allocation_ratio: <allocation_ratio>",
"meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid> inventories: additional: traits: additional: - 'CUSTOM_EXAMPLE_TRAIT'",
"meta: schema_version: 1.0 providers: - identification: uuid: USDCOMPUTE_NODE inventories: additional: CUSTOM_LLC: # Describing LLC on this Compute node total: 22 1 reserved: 2 2 min_unit: 1 3 max_unit: 11 4 step_size: 1 5 allocation_ratio: 1.0 6 traits: additional: # This Compute node enables support for P-state control - CUSTOM_P_STATE_ENABLED",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/provider.yaml",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_roles_data_custom_traits.yaml Compute:Compute Controller",
"########################## # GPU configuration # ########################## ComputeGpuParameters: NovaVGPUTypesDeviceAddressesMapping: {'nvidia-319': ['0000:82:00.0'], 'nvidia-320': ['0000:04:00.0']} CustomProviderInventories: - name: computegpu-0.localdomain_pci_0000_04_00_0 traits: - CUSTOM_NVIDIA_12 - name: computegpu-0.localdomain_pci_0000_82_00_0 traits: - CUSTOM_NVIDIA_11 - name: computegpu-1.localdomain_pci_0000_04_00_0 traits: - CUSTOM_NVIDIA_12 - name: computegpu-1.localdomain_pci_0000_82_00_0 traits: - CUSTOM_NVIDIA_11 - uuid: USDCOMPUTE_NODE inventories: CUSTOM_EXAMPLE_RESOURCE_CLASS: total: 100 1 reserved: 0 2 min_unit: 1 3 max_unit: 10 4 step_size: 1 5 allocation_ratio: 1.0 6 CUSTOM_ANOTHER_EXAMPLE_RESOURCE_CLASS: total: 100 traits: # This Compute node enables support for P-state and C-state control - CUSTOM_P_STATE_ENABLED - CUSTOM_C_STATE_ENABLED",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/roles_data_roles_data_custom_traits.yaml",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud)# openstack aggregate create <aggregate_name>",
"(overcloud)# openstack aggregate set --property <key=value> --property <key=value> <aggregate_name>",
"(overcloud)# openstack aggregate add host <aggregate_name> <host_name>",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> host-agg-flavor",
"(overcloud)USD openstack image create host-agg-image",
"(overcloud)# openstack flavor set --property aggregate_instance_extra_specs:ssd=true host-agg-flavor",
"(overcloud)# openstack image set --property os_type=linux host-agg-image",
"(overcloud)# openstack aggregate create --zone <availability_zone> <aggregate_name>",
"(overcloud)# openstack aggregate set --zone <availability_zone> <aggregate_name>",
"(overcloud)# openstack aggregate set --property <key=value> <aggregate_name>",
"(overcloud)# openstack aggregate add host <aggregate_name> <host_name>",
"(overcloud)# openstack aggregate show <aggregate_name>",
"(overcloud)# openstack aggregate remove host <aggregate_name> <host_name>",
"(overcloud)# openstack aggregate delete <aggregate_name>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml \\",
"(overcloud)# openstack project list",
"(overcloud)# openstack aggregate set --property filter_tenant_id<ID0>=<project_id0> --property filter_tenant_id<ID1>=<project_id1> --property filter_tenant_id<IDn>=<project_idn> <aggregate_name>",
"(overcloud)# openstack aggregate set --property filter_tenant_id0=78f1 --property filter_tenant_id1=9d3t --property filter_tenant_id2=aa29 project-isolated-aggregate",
"(overcloud)# openstack aggregate set --property filter_tenant_id=78f1 single-project-isolated-aggregate",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_pci_passthrough.yaml Compute:ComputePCI Compute Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.PCI-PASSTHROUGH <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: ComputePCI count: 1 defaults: resource_class: baremetal.PCI-PASSTHROUGH network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1",
"(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputePCINetworkConfigTemplate: /home/stack/templates/nic-configs/<pci_passthrough_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter",
"parameter_defaults: ControllerExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\"",
"parameter_defaults: ControllerExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\" numa_policy: \"preferred\"",
"parameter_defaults: ComputePCIParameters: NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1572\"",
"parameter_defaults: ComputePCIExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\"",
"parameter_defaults: ComputePCIParameters: KernelArgs: \"intel_iommu=on iommu=pt \\ vfio-pci.ids=<pci_device_id> rd.driver.pre=vfio-pci\"",
"parameter_defaults: ComputePCIParameters: KernelArgs: \"amd_iommu=on iommu=pt\"",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_pci_passthrough.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/pci_passthrough_controller.yaml -e /home/stack/templates/pci_passthrough_compute.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml",
"(overcloud)USD openstack flavor set --property \"pci_passthrough:alias\"=\"a1:2\" device_passthrough",
"(overcloud)USD openstack flavor set --property \"hw:pci_numa_affinity_policy\"=\"required\" device_passthrough",
"(overcloud)USD openstack image set --property hw_pci_numa_affinity_policy=required device_passthrough_image",
"openstack server create --flavor device_passthrough --image <image> --wait test-pci",
"lspci -nn | grep <device_name>",
"NovaPCIPassthrough: - address: \"*:0a:00.*\" physical_network: physnet1",
"NovaPCIPassthrough: - address: domain: \".*\" bus: \"02\" slot: \"01\" function: \"[0-2]\" physical_network: net1",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_vdpa.yaml ComputeVdpa Compute Controller",
"############################################################################### Role: ComputeVdpa # ############################################################################### - name: ComputeVdpa description: | VDPA Compute Node role CountDefault: 1 # Create external Neutron bridge tags: - compute - external_bridge networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet HostnameFormatDefault: '%stackname%-computevdpa-%index%' deprecated_nic_config_name: compute-vdpa.yaml",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.VDPA <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: ComputeVdpa count: 1 defaults: resource_class: baremetal.VDPA network_config: template: /home/stack/templates/nic-config/<role_topology_file>",
"- type: ovs_bridge name: br-tenant members: - type: sriov_pf name: enp6s0f0 numvfs: 8 use_dhcp: false vdpa: true link_mode: switchdev - type: sriov_pf name: enp6s0f1 numvfs: 8 use_dhcp: false vdpa: true link_mode: switchdev",
"(undercloud)USD openstack overcloud node provision [--stack <stack>] [--network-config \\] --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeVdpaNetworkConfigTemplate: /home/stack/templates/nic-configs/<role_topology_file> ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"parameter_defaults: NovaSchedulerEnabledFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']",
"parameter_defaults: ComputeVdpaParameters: NovaPCIPassthrough: - vendor_id: \"15b3\" product_id: \"101d\" address: \"06:00.0\" physical_network: \"tenant\" - vendor_id: \"15b3\" product_id: \"101d\" address: \"06:00.1\" physical_network: \"tenant\"",
"parameter_defaults: ComputeVdpaParameters: KernelArgs: \"intel_iommu=on iommu=pt\"",
"parameter_defaults: NeutronBridgeMappings: - <bridge_map_1> - <bridge_map_n> NeutronTunnelTypes: '<tunnel_types>' NeutronNetworkType: '<network_types>' NeutronNetworkVLANRanges: - <network_vlan_range_1> - <network_vlan_range_n>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_vdpa.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/vdpa_compute.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml",
"openstack port show vdpa-port",
"sudo podman exec -it nova_libvirt virsh dumpxml <instance_name> | grep mdev",
"sudo podman exec -it nova_virtqemud virsh dumpxml <instance_name> | grep mdev",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_gpu.yaml Compute:ComputeGpu Compute Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.GPU <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: ComputeGpu count: 1 defaults: resource_class: baremetal.GPU network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1",
"(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeGpuNetworkConfigTemplate: /home/stack/templates/nic-configs/<gpu_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"ls /sys/class/mdev_bus/",
"ls /sys/class/mdev_bus/<mdev_device>/mdev_supported_types",
"ls /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 ls /sys/class/mdev_bus/0000:85:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 ls /sys/class/mdev_bus/0000:86:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 ls /sys/class/mdev_bus/0000:87:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45",
"parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-35 - nvidia-36",
"parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-35 - nvidia-36 NovaVGPUTypesDeviceAddressesMapping: {'vgpu_<vgpu_type>': ['<pci_address>', '<pci_address>'],'vgpu_<vgpu_type>': ['<pci_address>', '<pci_address>']}",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_gpu.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/gpu.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml",
"(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_<TRAIT_NAME>",
"(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')",
"(overcloud)USD echo USDexisting_traits",
"(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait CUSTOM_<TRAIT_NAME> <host_uuid>",
"source ~/overcloudrc",
"(overcloud)USD openstack server create --flavor <flavor> --image <image> temp_vgpu_instance",
"(overcloud)USD openstack server image create --name vgpu_image temp_vgpu_instance",
"(overcloud)USD openstack flavor create --vcpus 6 --ram 8192 --disk 100 m1.small-gpu",
"(overcloud)USD openstack flavor set m1.small-gpu --property \"resources:VGPU=1\"",
"(overcloud)USD openstack flavor set m1.small-gpu --property trait:CUSTOM_NVIDIA_11=required",
"(overcloud)USD openstack server create --flavor m1.small-gpu --image vgpu_image --security-group web --nic net-id=internal0 --key-name lambda vgpu-instance",
"lspci -nn | grep <gpu_name>",
"lspci -nn | grep -i <gpu_name>",
"lspci -nn | grep -i nvidia 3b:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1) d8:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1db4] (rev a1)",
"lspci -v -s 3b:00.0 3b:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1) Capabilities: [bcc] Single Root I/O Virtualization (SR-IOV)",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter",
"ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"",
"ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"",
"parameter_defaults: NovaPCIPassthrough: - vendor_id: \"10de\" product_id: \"1eb8\"",
"ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"",
"ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"",
"parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/pci_passthru_controller.yaml -e /home/stack/templates/pci_passthru_compute.yaml",
"openstack flavor set m1.large --property \"pci_passthrough:alias\"=\"t4:2\"",
"openstack server create --flavor m1.large --image <custom_gpu> --wait test-pci",
"lspci -nn | grep <gpu_name>",
"nvidia-smi",
"----------------------------------------------------------------------------- | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |------------------------------- ---------------------- ----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |=============================== ====================== ======================| | 0 Tesla T4 Off | 00000000:01:00.0 Off | 0 | | N/A 43C P0 20W / 70W | 0MiB / 15109MiB | 0% Default | ------------------------------- ---------------------- ---------------------- ----------------------------------------------------------------------------- | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | -----------------------------------------------------------------------------",
"parameter_defaults: ComputeExtraConfig: nova::compute::force_config_drive: 'true'",
"parameter_defaults: ComputeExtraConfig: nova::compute::force_config_drive: 'true' nova::compute::config_drive_format: vfat",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml \\",
"(overcloud)USD openstack server create --flavor m1.tiny --image cirros test-config-drive-instance",
"mkdir -p /mnt/config mount /dev/disk/by-label/config-2 /mnt/config",
"blkid -t LABEL=\"config-2\" -odevice /dev/vdb mkdir -p /mnt/config mount /dev/vdb /mnt/config",
"parameter_defaults: ControllerExtraConfig: nova::vendordata::vendordata_providers: - DynamicJSON",
"parameter_defaults: ControllerExtraConfig: nova::vendordata::vendordata_providers: - DynamicJSON nova::vendordata::vendordata_dynamic_targets: \"target1@http://127.0.0.1:125\" nova::vendordata::vendordata_dynamic_targets: \"target2@http://127.0.0.1:126\"",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: <Role>Parameters: KernelArgsDeferReboot: True",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/kernelargs_manual_reboot.yaml",
"(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list",
"(overcloud)USD openstack compute service set <node> nova-compute --disable",
"(overcloud)USD openstack server list --host <node_UUID> --all-projects",
"[tripleo-admin@overcloud-compute-0 ~]USD sudo reboot",
"(overcloud)USD openstack compute service set <node_UUID> nova-compute --enable",
"(overcloud)USD openstack compute service list",
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: NovaVNCProxySSLMinimumVersion: <version>",
"parameter_defaults: NovaVNCProxySSLCiphers: <ciphers>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"parameter_defaults: ComputeParameters: NovaEnableVTPM: True",
"(undercloud)USD openstack overcloud deploy --templates -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud)USD openstack image create ... --property hw_tpm_version=2.0 vtpm-image",
"(overcloud)USD openstack image set --property hw_tpm_model=<tpm_model> vtpm-image",
"(overcloud)USD openstack server create --flavor m1.small --image vtpm-image vtpm-instance",
"dmesg | grep -i tpm",
"(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 --property hw:tpm_version=2.0 vtpm-flavor",
"(overcloud)USD openstack flavor set --property hw:tpm_model=<tpm_model> vtpm-flavor",
"(overcloud)USD openstack server create --flavor vtpm-flavor --image rhel-image vtpm-instance",
"dmesg | grep -i tpm",
"parameter_defaults: NovaCronArchiveDeleteRowsPurge: True",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"openstack port show <port_name/port_id>",
"(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list",
"(overcloud)USD openstack server list --host <source> --all-projects",
"(overcloud)USD openstack compute service set <source> nova-compute --disable",
"(overcloud)USD openstack server migrate <instance> --wait",
"(overcloud)USD openstack server list --all-projects",
"(overcloud)USD openstack server resize --confirm <instance>",
"(overcloud)USD openstack server resize --revert <instance>",
"(overcloud)USD openstack server start <instance>",
"(overcloud)USD openstack compute service set <source> nova-compute --enable",
"(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait",
"(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait --block-migration",
"(overcloud)USD openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | MIGRATING | | ... | ... | +----------------------+--------------------------------------+",
"(overcloud)USD openstack server list --host <dest> --all-projects",
"(overcloud)USD openstack compute service set <source> nova-compute --enable",
"openstack server migration list --server <instance> +----+-------------+----------- (...) | Id | Source Node | Dest Node | (...) +----+-------------+-----------+ (...) | 2 | - | - | (...) +----+-------------+-----------+ (...)",
"openstack server migration show <instance> <migration_id>",
"+------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | created_at | 2017-03-08T02:53:06.000000 | | dest_compute | controller | | dest_host | - | | dest_node | - | | disk_processed_bytes | 0 | | disk_remaining_bytes | 0 | | disk_total_bytes | 0 | | id | 2 | | memory_processed_bytes | 65502513 | | memory_remaining_bytes | 786427904 | | memory_total_bytes | 1091379200 | | server_uuid | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | source_compute | compute2 | | source_node | - | | status | running | | updated_at | 2017-03-08T02:53:47.000000 | +------------------------+--------------------------------------+",
"(overcloud)USD openstack server list --host <node> --all-projects",
"(overcloud)USD openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | NONE | | ... | ... | +----------------------+--------------------------------------+",
"(overcloud)USD openstack baremetal node show <node>",
"(overcloud)USD openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>",
"(overcloud)USD openstack server evacuate [--host <dest>] [--password <password>] <instance>",
"(overcloud)[stack@director ~]USD openstack hypervisor list",
"(overcloud)USD openstack compute service set <node> nova-compute --enable",
"openstack server migration list --server <instance>",
"openstack server migration abort <instance> <migration_id>",
"openstack server migration list --server <instance>",
"openstack server migration force complete <instance> <migration_id>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/configuring_the_compute_service_for_instance_creation/index |
Opening a support case at Red Hat Support | Opening a support case at Red Hat Support Create a support case from Red Hat Insights at Red Hat Support by performing the following steps: Prerequisites You are logged in to the Red Hat Customer Portal. Procedure Access the Red Hat Hybrid Cloud Console: Click Help (?) and select Open a support case. You are redirected to the Customer support page. From the Get Support page, select the type of issue that you want to report and click Continue. From the Summarize page, perform the following steps: In the Summary field, describe the issue. Note If Red Hat Insights is not auto-selected, you must manually select the product. From the Product dropdown menu, select Red Hat Insights. From the Version dropdown menu, select the component you have issues with. From the Review page, click Submit. A support case is created. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_and_managing_rhel_systems_in_hybrid_clouds/proc_opening-a-case-at-red-hat_host-management-services |
Chapter 30. Authentication and Interoperability | Chapter 30. Authentication and Interoperability sssd component, BZ# 1081046 The accountExpires attribute that SSSD uses to see whether an account has expired is not replicated to the Global Catalog by default. As a result, users with expired accounts can be allowed to log in when using GSSAPI authentication. To work around this problem, the Global Catalog support can be disabled by specifying ad_enable_gc=False in the sssd.conf file. With this setting, users with expired accounts will be denied access when using GSSAPI authentication. Note that SSSD connects to each LDAP server individually in this scenario, which can increase the connection count. ipa component, BZ# 1004156 When DNS support is being added for an Identity Management server (for example, by using ipa-dns-install or by using the --setup-dns option with the ipa-server-install or ipa-replica-install commands), the script adds a host name of a new Identity Management DNS server to the list of name servers in the primary Identity Management DNS zone (via DNS NS record). However, it does not add the DNS name server record to other DNS zones served by the Identity Management servers. As a consequence, the list of name servers in the non-primary DNS zones has only a limited set of Identity Management name servers serving the DNS zone (only one, without user intervention). When the limited set of Identity Management name servers is not available, these DNS zones are not resolvable. To work around this problem, manually add new DNS name server records to all non-primary DNS zones when a new Identity Management replica is being added. Also, manually remove such DNS name server records when the replica is being decommissioned. Non-primary DNS zones can maintain higher availability by having a manually maintained set of Identity Management name servers serving it. ipa component, BZ#971384 The default Unlock user accounts permission does not include the nsaccountlock attribute, which is necessary for a successful unlocking of a user entry. Consequently, a privileged user with this permission assigned cannot unlock another user, and errors like the following are displayed: To work around this problem, add nssacountlock to the list of allowed attributes in the aforementioned permission by running the following command: As a result, users with the Unlock user accounts permission assigned can unlock other users. ipa component, BZ# 973195 There are multiple problems across different tools used in the Identity Management installation, which prevents installation of user-provided certificates with intermediate certificate authority ( CA ). One of the errors is that incorrect trust flags are assigned to the intermediate CA certificate when importing a PKCS#12 file. Consequently, the Identity Management server installer fails due to an incomplete trust chain that is returned for Identity Management services. There is no known workaround, certificates not issued by the embedded Certificate Authority must not contain an intermediate CA in their trust chain. ipa component , BZ# 988473 Access control to lightweight directory access protocol ( LDAP ) objects representing trust with Active Directory (AD) is given to the Trusted Admins group in Identity Management. In order to establish the trust, the Identity Management administrator should belong to a group which is a member of the "Trusted Admins" group and this group should have relative identifier (RID) 512 assigned. 
To ensure this, run the ipa-adtrust-install command and then the ipa group-show admins --all command to verify that the "ipantsecurityidentifier" field contains a value ending with the "-512" string. If the field does not end with "-512", use the ipa group-mod admins --setattr=ipantsecurityidentifier=SID command, where SID is the value of the field from the ipa group-show admins --all command output with the last component value (-XXXX) replaced by the "-512" string. ipa component, BZ# 1084018 Red Hat Enterprise Linux 7 contains an updated version of slapi-nis , a Directory Server plug-in, which allows users of Identity Management and the Active Directory service to authenticate on legacy clients. However, the slapi-nis component only enables identity and authentication services, but does not allow users to change their password. As a consequence, users logged to legacy clients via slapi-nis compatibility tree can change their password only via the Identity Management Server Self-Service Web UI page or directly in Active Directory. ipa component, BZ# 1060349 The ipa host-add command does not verify the existence of AAAA records. As a consequence, ipa host-add fails if no A record is available for the host, although an AAAA record exists. To work around this problem, run ipa host-add with the --force option. ipa component, BZ# 1081626 An IPA master is uninstalled while SSL certificates for services other than IPA servers are tracked by the certmonger service. Consequently, an unexpected error can occur, and the uninstallation fails. To work around this problem, start certmonger , and run the ipa-getcert command to list the tracked certificates. Then run the ipa-getcert stop-tracking -i <Request ID> command to stop certmonger from tracking the certificates, and run the IPA uninstall script again. ipa component, BZ# 1088683 The ipa-client-install command does not process the --preserve-sssd option correctly when generating the IPA domain configuration in the sssd.conf file. As a consequence, the original configuration of the IPA domain is overwritten. To work around this problem, review sssd.conf after running ipa-client-install to identify and manually fix any unwanted changes. certmonger component, BZ# 996581 The directory containing a private key or certificate can have an incorrect SELinux context. As a consequence, the ipa-getcert request -k command fails, and an unhelpful error message is displayed. To work around this problem, set the SELinux context on the directory containing the certificate and the key to cert_t . You can resubmit an existing certificate request by running the ipa-getcert resubmit -i <Request ID> command. sssd component, BZ# 1103249 Under certain circumstances, the algorithm in the Privilege Attribute Certificate (PAC) responder component of the System Security Services Daemon (SSSD) does not effectively handle users who are members of a large number of groups. As a consequence, logging from Windows clients to Red Hat Enterprise Linux clients with Kerberos single sign-on (SSO) can be noticeably slow. There is currently no known workaround available. ipa component, BZ# 1033357 The ipactl restart command requires the directory server to be running. Consequently, if this condition is not met, ipactl restart fails with an error message. To work around this problem, use the ipactl start command to start the directory server before executing ipactl restart . Note that the ipactl status command can be used to verify if the directory server is running. 
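As a minimal illustration of the ipactl workaround described in the preceding entry, an administrator might run the following sequence; the commands only restate what the entry already prescribes and any output is omitted.

~]# ipactl status
~]# ipactl start
~]# ipactl restart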
pki-core component, BZ#1085105 The certificate subsystem fails to install if the system language is set to Turkish. To work around this problem, set the system language to English by putting the following line in the /etc/sysconfig/i18n file: LANG="en_US.UTF-8" Also, remove any other "LANG=" entries in /etc/sysconfig/i18n , then reboot the system. After reboot, you can successfully run ipa-server-install , and the original contents of /etc/sysconfig/i18n may be restored. ipa component, BZ# 1020563 The ipa-server-install and ipa-replica-install commands replace the list of NTP servers in the /etc/ntp.conf file with Red Hat Enterprise Linux default servers. As a consequence, NTP servers configured before installing IPA are not contacted, and servers from rhel.pool.ntp.org are contacted instead. If those default servers are unreachable, the IPA server does not synchronize its time via NTP. To work around this problem, add any custom NTP servers to /etc/ntp.conf , and remove the default Red Hat Enterprise Linux servers if required. The configured servers are now used for time synchronization after restarting the NTP service by running the systemctl restart ntpd.service command. gnutls component, BZ# 1084080 The gnutls utility fails to generate a non-encrypted private key when the user enters an empty password. To work around this problem, use the certtool command with the password option as follows: ~]USD certtool --generate-privkey --pkcs8 --password "" --outfile pkcs8.key | [
"ipa: ERROR: Insufficient access: Insufficient 'write' privilege to the 'nsAccountLock' attribute of entry 'uid=user,cn=users,cn=accounts,dc=example,dc=com'.",
"~]# ipa permission-mod \"Unlock user accounts\" --attrs={krbLastAdminUnlock,krbLoginFailedCount,nsaccountlock}"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/known-issues-authentication_and_interoperability |
3.5. Maintaining Consistent Schema | 3.5. Maintaining Consistent Schema A consistent schema within Directory Server helps LDAP client applications locate directory entries. Using an inconsistent schema makes it very difficult to efficiently locate information in the directory tree. Inconsistent schema use different attributes or formats to store the same information. Maintain schema consistency in the following ways: Use schema checking to ensure that attributes and object classes conform to the schema rules. Use syntax validation to ensure that attribute values match the required attribute syntax. Select and apply a consistent data format. 3.5.1. Schema Checking Schema checking ensures that all new or modified directory entries conform to the schema rules. When the rules are violated, the directory rejects the requested change. Note Schema checking checks only that the proper attributes are present. To verify that attribute values are in the correct syntax, use syntax validation, as described in Section 3.5.2, "Syntax Validation" . By default, the directory enables schema checking. Red Hat recommends not disabling this feature. For information on enabling and disabling schema checking, see the Red Hat Directory Server Administration Guide . With schema checking enabled, be attentive to required and allowed attributes as defined by the object classes. Object class definitions usually contain at least one required attribute and one or more optional attributes. Optional attributes are attributes that can be, but are not required to be, added to the directory entry. Attempting to add an attribute to an entry that is neither required nor allowed according to the entry's object class definition causes the Directory Server to return an object class violation message. For example, if an entry is defined to use the organizationalPerson object class, then the common name ( cn ) and surname ( sn ) attributes are required for the entry. That is, values for these attributes must be set when the entry is created. In addition, there is a long list of attributes that can optionally be used on the entry, including descriptive attributes like telephoneNumber , uid , streetAddress , and userPassword . 3.5.2. Syntax Validation Syntax validation means that the Directory Server checks that the value of an attribute matches the required syntax for that attribute. For example, syntax validation will confirm that a new telephoneNumber attribute actually has a valid telephone number for its value. 3.5.2.1. Overview of Syntax Validation By default, syntax validation is enabled. This is the most basic syntax validation. As with schema checking, this validates any directory modification and rejects changes that violate the syntax rules. Additional settings can be optionally configured so that syntax validation can log warning messages about syntax violations and then either reject the modification or allow the modification process to succeed. Syntax validation checks LDAP operations where a new attribute value is added, either because a new attribute is added or because an attribute value is changed. Syntax validation does not process existing attributes or attributes added through database operations like replication. Existing attributes can be validated using a special script, syntax-validate.pl . This feature validates all attribute syntaxes, with the exception of binary syntaxes (which cannot be verified) and non-standard syntaxes, which do not have a defined required format. 
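To make the schema-checking discussion above concrete, the following entry is a minimal sketch of an LDIF record that satisfies the organizationalPerson rules (the required cn and sn attributes are present) and uses a telephone number in the E.123 style discussed later in this section; the DN and attribute values are hypothetical:
dn: cn=Babs Jensen,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
cn: Babs Jensen
sn: Jensen
telephoneNumber: +1 555 222 1717
Removing the sn value, or supplying a telephoneNumber value that does not match the attribute's required syntax, would cause the server to reject the add or modify operation when schema checking and syntax validation are enabled.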
The syntaxes are validated against RFC 4514 , except for DNs, which are validated against the less strict RFC 1779 or RFC 2253 . (Strict DN validation can be configured.) 3.5.2.2. Syntax Validation and Other Directory Server Operations Syntax validation is mainly relevant for standard LDAP operations like creating entries (add) or editing attributes (modify). Validating attribute syntax can impact other Directory Server operations, however. Database Encryption For normal LDAP operations, an attribute is encrypted just before the value is written to the database. This means that encryption occurs after the attribute syntax is validated. Encrypted databases (as described in Section 9.8, "Encrypting the Database" ) can be exported and imported. Normally, it is strongly recommended that these export and import operations are done with the -E flag with db2ldif and ldif2db , which allows syntax validation to occur as expected during the import operation. However, if the encrypted database is exported without using the -E flag (which is not supported), then an LDIF with encrypted values is created. When this LDIF is then imported, the encrypted attributes cannot be validated, a warning is logged, and attribute validation is skipped in the imported entry. Synchronization There may be differences in the allowed or enforced syntaxes for attributes in Windows Active Directory entries and Red Hat Directory Server entries. In that case, the Active Directory values cannot be properly synchronized because syntax validation enforces the RFC standards in the Directory Server entries. Replication If the Directory Server 11.0 instance is a supplier that replicates its changes to a consumer, then there is no issue with using syntax validation. However, if the supplier in replication is an older version of Directory Server or has syntax validation disabled, then syntax validation should not be used on the 11.0 consumer because the Directory Server 11.0 consumer may reject attribute values that the supplier allows. 3.5.3. Selecting Consistent Data Formats LDAP schema allows any data to be placed on any attribute value. However, it is important to store data consistently in the directory tree by selecting a format appropriate for the LDAP client applications and directory users. With the LDAP protocol and Directory Server, data must be represented in the data formats specified in RFC 2252. For example, the correct LDAP format for telephone numbers is defined in two ITU-T recommendation documents: ITU-T Recommendation E.123 . Notation for national and international telephone numbers. ITU-T Recommendation E.163 . Numbering plan for the international telephone services. For example, a US phone number is formatted as +1 555 222 1717 . As another example, the postalAddress attribute expects an attribute value in the form of a multi-line string that uses dollar signs (USD) as line delimiters. A properly formatted directory entry appears as follows: Attributes can require strings, binary input, integers, and other formats. The allowed format is set in the schema definition for the attribute. 3.5.4. Maintaining Consistency in Replicated Schema When the directory schema is edited, the changes are recorded in the changelog. During replication, the changelog is scanned for changes, and any changes are replicated. Maintaining consistency in replicated schema allows replication to continue smoothly.
Consider the following points for maintaining consistent schema in a replicated environment: Do not modify the schema on a read-only replica. Modifying the schema on a read-only replica introduces an inconsistency in the schema and causes replication to fail. Do not create two attributes with the same name that use different syntaxes. If an attribute is created in a read-write replica that has the same name as an attribute on the supplier replica but has a different syntax from the attribute on the supplier, replication will fail. | [
"postalAddress: 1206 Directory DriveUSDPleasant View, MNUSD34200"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Directory_Schema-Maintaining_Consistent_Schema |
Chapter 17. Renaming, copying, or deleting assets | Chapter 17. Renaming, copying, or deleting assets After an asset has been created and defined, you can use the Repository View of the Project Explorer to copy, rename, delete, or archive assets as needed. Procedure In Business Central, go to Menu Design Projects and click the project name. Click the asset name and expand the Project Explorer by clicking the icon in the upper-left corner. Click in the Project Explorer toolbar and select Repository View to display the folders and files that make up the asset. Use the icons next to each listed asset to copy, rename, delete, or archive the asset as needed. Some of these options may not be available for all assets. Figure 17.1. Copy, rename, delete, or archive assets Use the following toolbar buttons to copy, rename, or delete assets. Figure 17.2. Toolbar options | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assets_renaming_proc
Chapter 6. Working with nodes | Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com Ready worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.31.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.31.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.31.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.31.3 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example output Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.31.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.31.3 Kube-Proxy Version: v1.31.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 
80m (0%) 0 (0%) 1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Note The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see Understanding how to update labels on nodes in the Additional resources section. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. Additional resources Understanding how to update labels on nodes 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: USD oc get pod --selector=<nodeSelector> USD oc get pod --selector=kubernetes.io/os Or: USD oc get pod -l=<nodeSelector> USD oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. 
Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.31.3 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . 
Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 6.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. Note Any change to a MachineSet object is not applied to existing machines owned by the compute machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the compute machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to apply the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: Example output USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Handling errors in single-node OpenShift clusters when the node reboots without draining application pods In single-node OpenShift clusters and in OpenShift Container Platform clusters in general, a situation can arise where a node reboot occurs without first draining the node. This can occur where an application pod requesting devices fails with the UnexpectedAdmissionError error. Deployment , ReplicaSet , or DaemonSet errors are reported because the application pods that require those devices start before the pod serving those devices. You cannot control the order of pod restarts. While this behavior is to be expected, it can cause a pod to remain on the cluster even though it has failed to deploy successfully. The pod continues to report UnexpectedAdmissionError . This issue is mitigated by the fact that application pods are typically included in a Deployment , ReplicaSet , or DaemonSet . If a pod is in this error state, it is of little concern because another instance should be running. 
Belonging to a Deployment , ReplicaSet , or DaemonSet guarantees the successful creation and execution of subsequent pods and ensures the successful deployment of the application. There is ongoing work upstream to ensure that such pods are gracefully terminated. Until that work is resolved, run the following command for a single-node OpenShift cluster to remove the failed pods: USD oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE> Note The option to drain the node is unavailable for single-node OpenShift clusters. Additional resources Understanding how to evacuate pods on nodes 6.2.5. Deleting nodes 6.2.5.1. Deleting nodes from a cluster To delete a node from the OpenShift Container Platform cluster, scale down the appropriate MachineSet object. Important When a cluster is integrated with a cloud provider, you must delete the corresponding machine to delete a node. Do not try to use the oc delete node command for this task. When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods that are not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Compute machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <cluster-id>-worker-<aws-region-az> . Scale down the compute machine set by using one of the following methods: Specify the number of replicas to scale down to by running the following command: USD oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api Edit the compute machine set custom resource by running the following command: USD oc edit machineset <machine-set-name> -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... name: <machine-set-name> namespace: openshift-machine-api # ... spec: replicas: 2 1 # ... 1 Specify the number of replicas to scale down to. Additional resources Manually scaling a compute machine set 6.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. 
Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Updating boot images The Machine Config Operator (MCO) uses a boot image to bring up a Red Hat Enterprise Linux CoreOS (RHCOS) node. By default, OpenShift Container Platform does not manage the boot image. This means that the boot image in your cluster is not updated along with your cluster. 
For example, if your cluster was originally created with OpenShift Container Platform 4.12, the boot image that the cluster uses to create nodes is the same 4.12 version, even if your cluster is at a later version. If the cluster is later upgraded to 4.13 or later, new nodes continue to scale with the same 4.12 image. This process could cause the following issues: Extra time to start up nodes Certificate expiration issues Version skew issues To avoid these issues, you can configure your cluster to update the boot image whenever you update your cluster. By modifying the MachineConfiguration object, you can enable this feature. Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) clusters and is not supported for Cluster CAPI Operator managed clusters. Important The updating boot image feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To view the current boot image used in your cluster, examine a machine set: Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1 # ... 1 This boot image is the same as the originally-installed OpenShift Container Platform version, in this example OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see "Enabling features using feature gates" in the "Additional resources" section. Procedure Edit the MachineConfiguration object, named cluster , to enable the updating of boot images by running the following command: USD oc edit MachineConfiguration cluster Optional: Configure the boot image update feature for all the machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2 1 Activates the boot image update feature. 2 Specifies that all the machine sets in the cluster are to be updated. Optional: Configure the boot image update feature for specific machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... 
managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: "true" 2 1 Activates the boot image update feature. 2 Specifies that any machine set with this label is to be updated. Tip If an appropriate label is not present on the machine set, add a key/value pair by running a command similar to following: Verification Get the boot image version by running the following command: USD oc get machinesets <machineset_name> -n openshift-machine-api -o yaml Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: "true" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1 # ... 1 This boot image is the same as the current OpenShift Container Platform version. Additional resources Enabling features using feature gates 6.3.2.1. Disabling updated boot images To disable the updated boot image feature, edit the MachineConfiguration object to remove the managedBootImages stanza. If you disable this feature after some nodes have been created with the new boot image version, any existing nodes retain their current boot image. Turning off this feature does not rollback the nodes or machine sets to the originally-installed boot image. The machine sets retain the boot image version that was present when the feature was enabled and is not updated again when the cluster is upgraded to a new OpenShift Container Platform version in the future. Procedure Disable updated boot images by editing the MachineConfiguration object: USD oc edit MachineConfiguration cluster Remove the managedBootImages stanza: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 1 Remove the entire stanza to disable updated boot images. 6.3.3. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 
1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.4. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.5. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. 
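Because several of the example arguments above relate to control groups, it can help to confirm which cgroup version a node is currently running before changing them. The following check is a minimal sketch that assumes you can open a debug shell on the node; the node name is a placeholder, and the reported filesystem type is the only output that matters: USD oc debug node/<node_name> Starting pod/<node_name>-debug ... To use host binaries, run `chroot /host` sh-4.2# chroot /host sh-4.2# stat -fc %T /sys/fs/cgroup/ A result of cgroup2fs indicates cgroup v2, while tmpfs indicates cgroup v1.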
In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.6. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Enabling swap memory is only available for container-native virtualization (CNV) users or use cases. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. 
The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroup v2) : cgroup v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroup v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroup v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.7. Migrating control plane nodes from one RHOSP host to another manually If control plane machine sets are not enabled on your cluster, you can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Note Control plane machine sets are not enabled on clusters that run on user-provisioned infrastructure. For information about control plane machine sets, see "Managing control plane machines with control plane machine sets". Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. 
Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. Additional resources Managing control plane machines with control plane machine sets 6.4. Adding worker nodes to an on-premise cluster For on-premise clusters, you can add worker nodes by using the OpenShift CLI ( oc ) to generate an ISO image, which can then be used to boot one or more nodes in your target cluster. This process can be used regardless of how you installed your cluster. You can add one or more nodes at a time while customizing each node with more complex configurations, such as static network configuration, or you can specify only the MAC address of each node. Any required configurations that are not specified during ISO generation are retrieved from the target cluster and applied to the new nodes. Note Machine or BareMetalHost resources are not automatically created after a node has been successfully added to the cluster. Preflight validation checks are also performed when booting the ISO image to inform you of failure-causing issues before you attempt to boot each node. 6.4.1. Supported platforms The following platforms are supported for this method of adding nodes: baremetal vsphere none external 6.4.2. Supported architectures The following architecture combinations have been validated to work when adding worker nodes using this process: amd64 worker nodes on amd64 or arm64 clusters arm64 worker nodes on amd64 or arm64 clusters s390x worker nodes on s390x clusters ppc64le worker nodes on ppc64le clusters 6.4.3. Adding nodes to your cluster You can add nodes with this method in the following two ways: Adding one or more nodes using a configuration file. You can specify configurations for one or more nodes in the nodes-config.yaml file before running the oc adm node-image create command. This is useful if you want to add more than one node at a time, or if you are specifying complex configurations. Adding a single node using only command flags. 
You can add a node by running the oc adm node-image create command with flags to specify your configurations. This is useful if you want to add only a single node at a time, and have only simple configurations to specify for that node. 6.4.3.1. Adding one or more nodes using a configuration file You can add one or more nodes to your cluster by using the nodes-config.yaml file to specify configurations for the new nodes. Prerequisites You have installed the OpenShift CLI ( oc ) You have an active connection to your target cluster You have a kubeconfig file available Procedure Create a new YAML file that contains configurations for the nodes you are adding and is named nodes-config.yaml . You must provide a MAC address for each new node. In the following example file, two new workers are described with an initial static network configuration: Example nodes-config.yaml file hosts: - hostname: extra-worker-1 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:00 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false - hostname: extra-worker-2 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:02 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:02 ipv4: enabled: true address: - ip: 192.168.122.3 prefix-length: 23 dhcp: false Generate the ISO image by running the following command: USD oc adm node-image create nodes-config.yaml Important In order for the create command to fetch a release image that matches the target cluster version, you must specify a valid pull secret. You can specify the pull secret either by using the --registry-config flag or by setting the REGISTRY_AUTH_FILE environment variable beforehand. Note If the directory of the nodes-config.yaml file is not specified by using the --dir flag, the tool looks for the file in the current directory. Verify that a new node.<arch>.iso file is present in the asset directory. The asset directory is your current directory, unless you specified a different one when creating the ISO image. Boot the selected node with the generated ISO image. Track progress of the node creation by running the following command: USD oc adm node-image monitor --ip-addresses <ip_addresses> where: <ip_addresses> Specifies a list of the IP addresses of the nodes that are being added. Note If reverse DNS entries are not available for your node, the oc adm node-image monitor command skips checks for pending certificate signing requests (CSRs). If these checks are skipped, you must manually check for CSRs by running the oc get csr command. Approve the CSRs by running the following command for each CSR: USD oc adm certificate approve <csr_name> 6.4.3.2. Adding a node with command flags You can add a single node to your cluster by using command flags to specify configurations for the new node. Prerequisites You have installed the OpenShift CLI ( oc ) You have an active connection to your target cluster You have a kubeconfig file available Procedure Generate the ISO image by running the following command. The MAC address must be specified using a command flag. See the "Cluster configuration reference" section for more flags that you can use with this command. USD oc adm node-image create --mac-address=<mac_address> where: <mac_address> Specifies the MAC address of the node that is being added. 
Important In order for the create command to fetch a release image that matches the target cluster version, you must specify a valid pull secret. You can specify the pull secret either by using the --registry-config flag or by setting the REGISTRY_AUTH_FILE environment variable beforehand. Tip To see additional flags that can be used to configure your node, run the following oc adm node-image create --help command. Verify that a new node.<arch>.iso file is present in the asset directory. The asset directory is your current directory, unless you specified a different one when creating the ISO image. Boot the node with the generated ISO image. Track progress of the node creation by running the following command: USD oc adm node-image monitor --ip-addresses <ip_address> where: <ip_address> Specifies a list of the IP addresses of the nodes that are being added. Note If reverse DNS entries are not available for your node, the oc adm node-image monitor command skips checks for pending certificate signing requests (CSRs). If these checks are skipped, you must manually check for CSRs by running the oc get csr command. Approve the pending CSRs by running the following command for each CSR: USD oc adm certificate approve <csr_name> 6.4.4. Cluster configuration reference When creating the ISO image, configurations are retrieved from the target cluster and are applied to the new nodes. Any configurations for your cluster are applied to the nodes unless you override the configurations in either the nodes-config.yaml file or any flags you add to the oc adm node-image create command. 6.4.4.1. YAML file parameters Configuration parameters that can be specified in the nodes-config.yaml file are described in the following table: Table 6.2. nodes-config.yaml parameters Parameter Description Values Host configuration. An array of host configuration objects. Hostname. Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods, although configuring a hostname through this parameter is optional. String. Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the nodes-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section. An array of host configuration objects. The name of an interface on the host. String. The MAC address of an interface on the host. A MAC address such as the following example: 00-B0-D0-63-C2-26 . Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The node-adding tool examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page. The name of the device the RHCOS image is provisioned to. String. The host network definition. The configuration must match the Host Network Management API defined in the nmstate documentation . A dictionary of host network configuration objects. Optional. Specifies the architecture of the nodes you are adding. This parameter allows you to override the default value from the cluster when required. String. Optional. The file containing the SSH key to authenticate access to your cluster machines. String. 
Optional. Specifies the URL of the server to upload Preboot Execution Environment (PXE) assets to when you are generating an iPXE script. You must also set the --pxe flag to generate PXE assets instead of an ISO image. String. 6.4.4.2. Command flag options You can use command flags with the oc adm node-image create command to configure the nodes you are creating. The following table describes command flags that are not limited to the single-node use case: Table 6.3. General command flags Flag Description Values --certificate-authority The path to a certificate authority bundle to use when communicating with the managed container image registries. If the --insecure flag is used, this flag is ignored. String --dir The path containing the configuration file, if provided. This path is also used to store the generated artifacts. String --insecure Allows push and pull operations to registries to be made over HTTP. Boolean -o , --output-name The name of the generated output image. String p , --pxe Generates Preboot Execution Environment (PXE) assets instead of a bootable ISO file. When this flag is set, you can also use the bootArtifactsBaseURL parameter in the nodes-config.yaml file to specify URL of the server you will upload PXE assets to. Boolean -a , --registry-config The path to your registry credentials. Alternatively, you can specify the REGISTRY_AUTH_FILE environment variable. The default paths are USD{XDG_RUNTIME_DIR}/containers/auth.json , /run/containers/USD{UID}/auth.json , USD{XDG_CONFIG_HOME}/containers/auth.json , USD{DOCKER_CONFIG} , ~/.docker/config.json , ~/.dockercfg. The order can be changed through the deprecated REGISTRY_AUTH_PREFERENCE environment variable to a "docker" value, in order to prioritize Docker credentials over Podman. String -r , --report Generates a report of the node creation process regardless of whether the process is successful or not. If you do not specify this flag, reports are generated only in cases of failure. Boolean --skip-verification An option to skip verifying the integrity of the retrieved content. This is not recommended, but might be necessary when importing images from older image registries. Bypass verification only if the registry is known to be trustworthy. Boolean The following table describes command flags that can be used only when creating a single node: Table 6.4. Single-node only command flags Flag Description Values -c , --cpu-architecture The CPU architecture to be used to install the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String --hostname The hostname to be set for the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String -m , --mac-address The MAC address used to identify the host to apply configurations to. This flag can be used to create only a single node, and the --mac-address flag must be defined. String --network-config-path The path to a YAML file containing NMState configurations to be applied to the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String --root-device-hint A hint for specifying the storage location for the image root filesystem. The accepted format is <hint_name>:<value> . This flag can be used to create only a single node, and the --mac-address flag must be defined. String -k , --ssh-key-path The path to the SSH key used to access the node. 
This flag can be used to create only a single node, and the --mac-address flag must be defined. String Additional resources Root device hints 6.5. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. If you use both options, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when a large number of I/O-intensive pods are running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.5.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two values limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node.
4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.6. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.6.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. 
Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.6.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. 
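Putting the profile: and recommend: sections together, a minimal custom Tuned CR might look like the following sketch. The profile name, sysctl setting, and node label are hypothetical and are shown only to illustrate the structure; the <match> syntax used here is described below:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-example-sysctl
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: openshift-example-sysctl
    data: |
      [main]
      summary=Example profile that sets an extra sysctl on labeled nodes
      include=openshift-node
      [sysctl]
      net.ipv4.tcp_fin_timeout=30
  recommend:
  - match:
    - label: example.com/extra-sysctl
    priority: 20
    profile: openshift-example-sysctl

In this sketch, nodes that carry the example.com/extra-sysctl label receive the custom profile, which inherits from openshift-node and adds one sysctl on top of it.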
<match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. 
It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.6.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.6.4. 
Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.7. Remediating, fencing, and maintaining nodes When node-level failures occur, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. Failures affecting these workloads risk data loss, corruption, or both. It is important to isolate the node, known as fencing , before initiating recovery of the workload, known as remediation , and recovery of the node. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 6.8. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.8.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.8.2. 
Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.8.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.8.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. 
Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.9. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.9.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.5. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. 
DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 6.9.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.6. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from previous runs. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.9.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool.
You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. 
Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as True until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.10. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.10.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.10.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: [Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds] Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.10.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent.
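For example, one quick way to see the difference between a node's total capacity and its allocatable value is to query the node object directly; the node name is a placeholder:

USD oc get node <node_name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'

The gap between the two values corresponds to the reserved resources and hard eviction thresholds described above.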
To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 6.10.1.3. Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.10.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.10.2. 
Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without your needing to manually calculate and update the values. This feature is disabled by default. Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.10.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. 
The ephemeral-storage resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (for example, cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values . Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.11. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane, allowing the compute nodes to use CPUs 4 - 23. 6.11.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved parameter. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP.
Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved parameter, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.12. Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.12.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.7. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.12.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... 
You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.13. Creating infrastructure nodes Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. 
This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 6.13.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 6.13.1.1. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. 
For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets | [
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com Ready worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.31.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.31.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.31.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.31.3",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.31.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.31.3 Kube-Proxy Version: v1.31.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 80m (0%) 0 (0%) 
1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #",
"oc get pod --selector=<nodeSelector>",
"oc get pod --selector=kubernetes.io/os",
"oc get pod -l=<nodeSelector>",
"oc get pod -l kubernetes.io/os=linux",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.31.3",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>",
"oc label nodes webconsole-7f7f6 unhealthy=true",
"kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #",
"oc label pods --all <key_1>=<value_1>",
"oc label pods --all status=unhealthy",
"oc adm cordon <node>",
"oc adm cordon node1.example.com",
"node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled",
"oc adm uncordon <node1>",
"oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>",
"oc get machinesets -n openshift-machine-api",
"oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api",
"oc edit machineset <machine-set-name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get machineconfigpool --show-labels",
"NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False",
"oc label machineconfigpool worker custom-kubelet=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #",
"oc create -f <file-name>",
"oc create -f master-kube-config.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2",
"oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api",
"oc get machinesets <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All",
"oc edit schedulers.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #",
"oc create -f 99-worker-setsebool.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"oc label machineconfigpool worker kubelet-swap=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #",
"#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done",
"hosts: - hostname: extra-worker-1 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:00 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false - hostname: extra-worker-2 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:02 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:02 ipv4: enabled: true address: - ip: 192.168.122.3 prefix-length: 23 dhcp: false",
"oc adm node-image create nodes-config.yaml",
"oc adm node-image monitor --ip-addresses <ip_addresses>",
"oc adm certificate approve <csr_name>",
"oc adm node-image create --mac-address=<mac_address>",
"oc adm node-image monitor --ip-addresses <ip_address>",
"oc adm certificate approve <csr_name>",
"hosts:",
"hosts: hostname:",
"hosts: interfaces:",
"hosts: interfaces: name:",
"hosts: interfaces: macAddress:",
"hosts: rootDeviceHints:",
"hosts: rootDeviceHints: deviceName:",
"hosts: networkConfig:",
"cpuArchitecture:",
"sshKey:",
"bootArtifactsBaseURL:",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"oc debug node/<node1>",
"chroot /host",
"systemctl reboot",
"ssh core@<master-node>.<cluster_name>.<base_domain>",
"sudo systemctl reboot",
"oc adm uncordon <node1>",
"ssh core@<target_node>",
"sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"oc debug node/<node_name>",
"chroot /host",
"SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #",
"oc create -f <file_name>.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/nodes/working-with-nodes |
Chapter 88. versions | Chapter 88. versions This chapter describes the commands under the versions command. 88.1. versions show Show available versions of services Usage: Table 88.1. Command arguments Value Summary -h, --help Show this help message and exit --all-interfaces Show values for all interfaces --interface <interface> Show versions for a specific interface. --region-name <region_name> Show versions for a specific region. --service <service> Show versions for a specific service. the argument should be either an exact match to what is in the catalog or a known official value or alias from service-types-authority ( https://service-types.openstack.org/) --status <status> Show versions for a specific status. valid values are: - SUPPORTED - CURRENT - DEPRECATED - EXPERIMENTAL Table 88.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 88.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 88.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack versions show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-interfaces | --interface <interface>] [--region-name <region_name>] [--service <service>] [--status <status>]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/versions |
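As a usage sketch for the versions show command documented above (the service, region, and output format are illustrative; actual output depends on your cloud):

# Show only SUPPORTED Identity API versions in one region, formatted as JSON
openstack versions show --service identity --region-name RegionOne --status SUPPORTED -f json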
Chapter 2. Common configuration properties | Chapter 2. Common configuration properties Use Common configuration properties to configure Streams for Apache Kafka custom resources. You add common configuration properties to a custom resource like any other supported configuration for that resource. 2.1. replicas Use the replicas property to configure replicas. The type of replication depends on the resource. KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster. Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability. Note When running a Kafka component on OpenShift it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift will automatically reschedule the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times as the other nodes will be up and running. 2.2. bootstrapServers Use the bootstrapServers property to configure a list of bootstrap servers. The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by Streams for Apache Kafka. If on the same OpenShift cluster, each list must ideally contain the Kafka cluster bootstrap service which is named CLUSTER-NAME -kafka-bootstrap and a port number. If deployed by Streams for Apache Kafka but on different OpenShift clusters, the list content depends on the approach used for exposing the clusters (routes, ingress, nodeports or loadbalancers). When using Kafka with a Kafka cluster not managed by Streams for Apache Kafka, you can specify the bootstrap servers list according to the configuration of the given cluster. 2.3. ssl (supported TLS versions and cipher suites) You can incorporate SSL configuration and cipher suite specifications to further secure TLS-based communication between your client application and a Kafka cluster. In addition to the standard TLS configuration, you can specify a supported TLS version and enable cipher suites in the configuration for the Kafka broker. You can also add the configuration to your clients if you wish to limit the TLS versions and cipher suites they use. The configuration on the client must only use protocols and cipher suites that are enabled on the broker. A cipher suite is a set of security mechanisms for secure connection and data transfer. For example, the cipher suite TLS_AES_256_GCM_SHA384 is composed of the following mechanisms, which are used in conjunction with the TLS protocol: AES (Advanced Encryption Standard) encryption (256-bit key) GCM (Galois/Counter Mode) authenticated encryption SHA384 (Secure Hash Algorithm) data integrity protection The combination is encapsulated in the TLS_AES_256_GCM_SHA384 cipher suite specification. The ssl.enabled.protocols property specifies the available TLS versions that can be used for secure communication between the cluster and its clients. The ssl.protocol property sets the default TLS version for all connections, and it must be chosen from the enabled protocols. Use the ssl.endpoint.identification.algorithm property to enable or disable hostname verification (configurable only in components based on Kafka clients - Kafka Connect, MirrorMaker 1/2, and Kafka Bridge). Example SSL configuration # ... 
config: ssl.cipher.suites: TLS_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 1 ssl.enabled.protocols: TLSv1.3, TLSv1.2 2 ssl.protocol: TLSv1.3 3 ssl.endpoint.identification.algorithm: HTTPS 4 # ... 1 Cipher suite specifications enabled. 2 TLS versions supported. 3 Default TLS version is TLSv1.3 . If a client only supports TLSv1.2, it can still connect to the broker and communicate using that supported version, and vice versa if the configuration is on the client and the broker only supports TLSv1.2. 4 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. 2.4. trustedCertificates Having set tls to configure TLS encryption, use the trustedCertificates property to provide a list of secrets with key names under which the certificates are stored in X.509 format. You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-TLS-CERTIFICATE-FILE.crt Example TLS encryption configuration tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt If certificates are stored in the same secret, it can be listed multiple times. If you want to enable TLS encryption, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array: Example of enabling TLS with the default Java certificates tls: trustedCertificates: [] For information on configuring mTLS authentication, see the KafkaClientAuthenticationTls schema reference . 2.5. resources Configure resource requests and limits to control resources for Streams for Apache Kafka containers. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of Kafka. How you configure resources in a production environment depends on a number of factors. For example, applications are likely to be sharing resources in your OpenShift cluster. For Kafka, the following aspects of a deployment can impact the resources you need: Throughput and size of messages The number of network threads handling messages The number of producers and consumers The number of topics and partitions The values specified for resource requests are reserved and always available to the container. Resource limits specify the maximum resources that can be consumed by a given container. The amount between the request and limit is not reserved and might not be always available. A container can use the resources up to the limit only when they are available. Resource limits are temporary and can be reallocated. Resource requests and limits If you set limits without requests or vice versa, OpenShift uses the same value for both. Setting equal requests and limits for resources guarantees quality of service, as OpenShift will not kill containers unless they exceed their limits. You can configure resource requests and limits for one or more supported resources. Example resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... resources: requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" entityOperator: #... topicOperator: #... resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource. 
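The same resources block applies to the other component custom resources as well; as a sketch only (the values are illustrative, not sizing guidance), a Kafka Connect deployment might be configured like this:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  resources:
    requests:
      memory: 2Gi
      cpu: "1"
    limits:
      memory: 2Gi
      cpu: "2"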
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled. Note Streams for Apache Kafka uses the OpenShift syntax for specifying memory and cpu resources. For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers . Memory resources When configuring memory resources, consider the total requirements of the components. Kafka runs inside a JVM and uses an operating system page cache to store message data before writing to disk. The memory request for Kafka should fit the JVM heap and page cache. You can configure the jvmOptions property to control the minimum and maximum heap size. Other components don't rely on the page cache. You can configure memory resources without configuring the jvmOptions to control the heap size. Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes. Use the following suffixes in the specification: M for megabytes G for gigabytes Mi for mebibytes Gi for gibibytes Example resources using different memory units # ... resources: requests: memory: 512Mi limits: memory: 2Gi # ... For more details about memory specification and additional supported units, see Meaning of memory . CPU resources A CPU request should be enough to give a reliable performance at any time. CPU requests and limits are specified as cores or millicpus / millicores . CPU cores are specified as integers ( 5 CPU core) or decimals ( 2.5 CPU core). 1000 millicores is the same as 1 CPU core. Example CPU units # ... resources: requests: cpu: 500m limits: cpu: 2.5 # ... The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed. For more information on CPU specification, see Meaning of CPU . 2.6. image Use the image property to configure the container image used by the component. Overriding container images is recommended only in special situations where you need to use a different container registry or a customized image. For example, if your network does not allow access to the container repository used by Streams for Apache Kafka, you can copy the Streams for Apache Kafka images or build them from the source. However, if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly. A copy of the container image might also be customized and used for debugging. You can specify which container image to use for a component using the image property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.topicOperator Kafka.spec.entityOperator.userOperator Kafka.spec.entityOperator.tlsSidecar Kafka.spec.cruiseControl Kafka.spec.kafkaExporter Kafka.spec.kafkaBridge KafkaConnect.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec Note Changing the Kafka image version does not automatically update the image versions for other Kafka components, such as Kafka Exporter. These components are not version dependent, so no additional configuration is necessary when updating the Kafka image version. Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker Kafka, Kafka Connect, and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. 
The default images for the different Kafka versions are configured in the following environment variables: STRIMZI_KAFKA_IMAGES STRIMZI_KAFKA_CONNECT_IMAGES STRIMZI_KAFKA_MIRROR_MAKER2_IMAGES (Deprecated) STRIMZI_KAFKA_MIRROR_MAKER_IMAGES These environment variables contain mappings between Kafka versions and corresponding images. The mappings are used together with the image and version properties to determine the image used: If neither image nor version are given in the custom resource, the version defaults to the Cluster Operator's default Kafka version, and the image used is the one corresponding to this version in the environment variable. If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator's default Kafka version. If version is given but image is not, then the image that corresponds to the given version in the environment variable is used. If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version. The image and version for the components can be configured in the following properties: For Kafka in spec.kafka.image and spec.kafka.version . For Kafka Connect and Kafka MirrorMaker in spec.image and spec.version . Warning It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator's environment variables. Configuring the image property in other resources For the image property in the custom resources for other components, the given value is used during deployment. If the image property is not set, the container image specified as an environment variable in the Cluster Operator configuration is used. If an image name is not defined in the Cluster Operator configuration, then a default value is used. For more information on image environment variables, see Configuring the Cluster Operator . Table 2.1. Image environment variables and defaults Component Environment variable Default image Topic Operator STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 User Operator STRIMZI_DEFAULT_USER_OPERATOR_IMAGE registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 Entity Operator TLS sidecar STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 Kafka Exporter STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 Cruise Control STRIMZI_DEFAULT_CRUISE_CONTROL_IMAGE registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 Kafka Bridge STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE registry.redhat.io/amq-streams/bridge-rhel9:2.7.0 Kafka initializer STRIMZI_DEFAULT_KAFKA_INIT_IMAGE registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 Example container image configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ... 2.7. livenessProbe and readinessProbe healthchecks Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in Streams for Apache Kafka. Healthchecks are periodical tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it. 
For more details about the probes, see Configure Liveness and Readiness Probes . Both livenessProbe and readinessProbe support the following options: initialDelaySeconds timeoutSeconds periodSeconds successThreshold failureThreshold Example of liveness and readiness probe configuration # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... For more information about the livenessProbe and readinessProbe options, see the Probe schema reference . 2.8. metricsConfig Use the metricsConfig property to enable and configure Prometheus metrics. The metricsConfig property contains a reference to a ConfigMap that has additional configurations for the Prometheus JMX Exporter . Streams for Apache Kafka supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. To enable Prometheus metrics export without further configuration, you can reference a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . When referencing an empty file, all metrics are exposed as long as they have not been renamed. Example ConfigMap with metrics configuration for Kafka kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: "USD3" topic: "USD4" partition: "USD5" # further configuration Example metrics configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # ... zookeeper: # ... When metrics are enabled, they are exposed on port 9404. When the metricsConfig (or deprecated metrics ) property is not defined in the resource, the Prometheus metrics are disabled. For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka . 2.9. jvmOptions The following Streams for Apache Kafka components run inside a Java Virtual Machine (JVM): Apache Kafka Apache ZooKeeper Apache Kafka Connect Apache Kafka MirrorMaker Streams for Apache Kafka Bridge To optimize their performance on different platforms and architectures, you configure the jvmOptions property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.userOperator Kafka.spec.entityOperator.topicOperator Kafka.spec.cruiseControl KafkaNodePool.spec KafkaConnect.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec You can specify the following options in your configuration: -Xms Minimum initial allocation heap size when the JVM starts -Xmx Maximum heap size -XX Advanced runtime options for the JVM javaSystemProperties Additional system properties gcLoggingEnabled Enables garbage collector logging Note The units accepted by JVM settings, such as -Xmx and -Xms , are the same units accepted by the JDK java binary in the corresponding image. Therefore, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is different from the units used for memory requests and limits , which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes. 
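To show where the jvmOptions block sits in practice, here is a minimal sketch of a Kafka resource that combines a memory limit with explicit heap settings (the sizes are illustrative only):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    resources:
      requests:
        memory: 8Gi
      limits:
        memory: 8Gi
    jvmOptions:
      # Heap is kept well below the container memory to leave room for the page cache
      "-Xms": "4g"
      "-Xmx": "4g"
  zookeeper:
    # ...

The -Xms and -Xmx settings themselves are explained in more detail below.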
-Xms and -Xmx options In addition to setting memory request and limit values for your containers, you can use the -Xms and -Xmx JVM options to set specific heap sizes for your JVM. Use the -Xms option to set an initial heap size and the -Xmx option to set a maximum heap size. Specify heap size to have more control over the memory allocated to your JVM. Heap sizes should make the best use of a container's memory limit (and request) without exceeding it. Heap size and any other memory requirements need to fit within a specified memory limit. If you don't specify heap size in your configuration, but you configure a memory resource limit (and request), the Cluster Operator imposes default heap sizes automatically. The Cluster Operator sets default maximum and minimum heap values based on a percentage of the memory resource configuration. The following table shows the default heap values. Table 2.2. Default heap settings for components Component Percent of available memory allocated to the heap Maximum limit Kafka 50% 5 GB ZooKeeper 75% 2 GB Kafka Connect 75% None MirrorMaker 2 75% None MirrorMaker 75% None Cruise Control 75% None Kafka Bridge 50% 31 Gi If a memory limit (and request) is not specified, a JVM's minimum heap size is set to 128M . The JVM's maximum heap size is not defined to allow the memory to increase as needed. This is ideal for single node environments in test and development. Setting an appropriate memory request can prevent the following: OpenShift killing a container if there is pressure on memory from other pods running on the node. OpenShift scheduling a container to a node with insufficient memory. If -Xms is set to -Xmx , the container will crash immediately; if not, the container will crash at a later time. In this example, the JVM uses 2 GiB (=2,147,483,648 bytes) for its heap. Total JVM memory usage can be a lot more than the maximum heap size. Example -Xmx and -Xms configuration # ... jvmOptions: "-Xmx": "2g" "-Xms": "2g" # ... Setting the same value for initial ( -Xms ) and maximum ( -Xmx ) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. Important Containers performing lots of disk I/O, such as Kafka broker containers, require available memory for use as an operating system page cache. For such containers, the requested memory should be significantly higher than the memory used by the JVM. -XX option -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka. Example -XX configuration jvmOptions: "-XX": "UseG1GC": "true" "MaxGCPauseMillis": "20" "InitiatingHeapOccupancyPercent": "35" "ExplicitGCInvokesConcurrent": "true" JVM options resulting from the -XX configuration Note When no -XX options are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used. javaSystemProperties javaSystemProperties are used to configure additional Java system properties, such as debugging utilities. Example javaSystemProperties configuration jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl For more information about the jvmOptions , see the JvmOptions schema reference . 2.10. Garbage collector logging The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows: Example GC logging configuration # ... jvmOptions: gcLoggingEnabled: true # ... | [
"config: ssl.cipher.suites: TLS_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 1 ssl.enabled.protocols: TLSv1.3, TLSv1.2 2 ssl.protocol: TLSv1.3 3 ssl.endpoint.identification.algorithm: HTTPS 4",
"create secret generic MY-SECRET --from-file= MY-TLS-CERTIFICATE-FILE.crt",
"tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt",
"tls: trustedCertificates: []",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" entityOperator: # topicOperator: # resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\"",
"resources: requests: memory: 512Mi limits: memory: 2Gi",
"resources: requests: cpu: 500m limits: cpu: 2.5",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # image: my-org/my-image:latest # zookeeper: #",
"readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5",
"kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: \"USD3\" topic: \"USD4\" partition: \"USD5\" # further configuration",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # zookeeper: #",
"jvmOptions: \"-Xmx\": \"2g\" \"-Xms\": \"2g\"",
"jvmOptions: \"-XX\": \"UseG1GC\": \"true\" \"MaxGCPauseMillis\": \"20\" \"InitiatingHeapOccupancyPercent\": \"35\" \"ExplicitGCInvokesConcurrent\": \"true\"",
"-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC",
"jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl",
"jvmOptions: gcLoggingEnabled: true"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/con-common-configuration-properties-reference |
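For example, to apply the heap sizing guidance above from the command line, you might patch an existing Kafka resource so that the broker gets a fixed heap inside an explicit memory limit, leaving the remainder for the operating system page cache, and enable GC logging at the same time. The resource name my-cluster and the 8Gi/4g sizes are illustrative only; adjust them to your workload:
# Illustrative sketch: 4 GiB fixed heap inside an 8 Gi container limit, with GC logging enabled
oc patch kafka my-cluster --type merge -p '{
  "spec": {
    "kafka": {
      "resources": {
        "requests": { "memory": "8Gi" },
        "limits":   { "memory": "8Gi" }
      },
      "jvmOptions": {
        "-Xms": "4g",
        "-Xmx": "4g",
        "gcLoggingEnabled": true
      }
    }
  }
}'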
29.4. Removing Keytabs | 29.4. Removing Keytabs Removing a keytab and creating a new keytab is necessary, for example, when you unenroll and re-enroll a host or when you experience Kerberos connection errors. To remove all keytabs on a host, use the ipa-rmkeytab utility and pass these options: --realm ( -r ) to specify the Kerberos realm --keytab ( -k ) to specify the path to the keytab file To remove a keytab for a specific service, use the --principal ( -p ) option to specify the service principal: | [
"ipa-rmkeytab --realm EXAMPLE.COM --keytab /etc/krb5.keytab",
"ipa-rmkeytab --principal ldap/client.example.com --keytab /etc/krb5.keytab"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/removing-keytabs |
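When troubleshooting, it can also help to confirm what the keytab contains before and after removal. A possible sequence, assuming the default keytab path, is:
# List the current entries so you know which principals the keytab holds
klist -kt /etc/krb5.keytab
# Remove every entry for the EXAMPLE.COM realm
ipa-rmkeytab --realm EXAMPLE.COM --keytab /etc/krb5.keytab
# Confirm that the entries are gone
klist -kt /etc/krb5.keytab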
Chapter 1. Content patching overview | Chapter 1. Content patching overview Patching leverages Red Hat software and management automation expertise to enable consistent patch workflows for Red Hat Enterprise Linux (RHEL) systems across the open hybrid cloud. It provides a single canonical view of applicable advisories across all of your deployments, whether they be Red Hat Satellite, hosted Red Hat Subscription Management (RHSM), or the public cloud. Use content patching in Insights to see all of the applicable Red Hat and Extra Packages for Enterprise Linux (EPEL) advisories for your RHEL systems checking into Insights. patch any system with one or more advisories by using remediation playbooks. see package updates available for Red Hat and non-Red Hat repositories as of the last system checkin. Your host must be running Red Hat Enterprise Linux (RHEL) 7, RHEL 8.6+ or RHEL 9 and it must maintain a fresh yum/dnf cache. Note Configure role-based access control (RBAC) in Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Users . See User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP for more information about this feature and example use cases. 1.1. Criteria for patch and vulnerability errata The content patching function collects a variety of data to create meaningful and actionable errata for your systems. The Insights client collects the following data on each checkin: List of installed packages, including name, epoch, version, release, and architecture (NEVRA) List of enabled modules (RHEL 8 and later) List of enabled repositories Output of yum updateinfo -C or dnf updateinfo -C Release version from systems with a version lock System architecture (eg. x86_64 ) Additionally, Insights for Red Hat Enterprise Linux collects metadata from the following data sources: Metadata from product repositories delivered by the Red Hat Content Delivery Network (CDN) Metadata from Extra Packages for Enterprise Linux (EPEL) repositories Red Hat Open Vulnerability and Assessment Language (OVAL) feed Insights for Red Hat Enterprise Linux compares the set of system data to the collected errata and vulnerability metadata in order to generate a set of available updates for each system. These updates include package updates, Red Hat errata, and Common Vulnerabilities and Exposures (CVEs). Additional resources For more information about Common Vulnerabilities and Exposures (CVEs), refer to the following resources: Assessing and Monitoring Security Vulnerabilities on RHEL Systems with FedRAMP Security > Vulnerability > CVEs 1.2. Reviewing and filtering applicable advisories and systems in the inventory You can see all of the applicable advisories and installed packages for systems checking into Red Hat Insights for Red Hat Enterprise Linux. Procedure On Red Hat Hybrid Cloud Console , navigate to Content > Advisories . You can also search for advisories by name using the search box, and filter advisories by: Type - Security, Bugfix, Enhancement, Unknown Publish date - Last 7 days, 30 days, 90 days, Last year, or More than 1 year ago Navigate to Content > Systems to see a list of affected systems you can patch with applicable advisories. You can also search for specific systems using the search box. Navigate to Content > Packages to see a list of packages with updates available in your environment. You can also search for specific packages using the search box. 1.3. 
System patching using Insights remediation playbooks The following steps demonstrate the patching workflow from the Content > Advisories page in Red Hat Insights for Red Hat Enterprise Linux: Procedure On Red Hat Hybrid Cloud Console , navigate to Content > Advisories . Click the advisory you want to apply to affected systems. You will see a description of the advisory, a link to view packages and errata at access.redhat.com, and a list of affected systems. The total number of applicable advisories of each type (Security, Bugfix, Enhancement) against each system is also displayed. Select the system(s) for which you want to create a playbook, then click Remediate . You can choose to modify an existing Playbook or create a new one. Accordingly, select Existing Playbook and the playbook name from the drop-down list, then click Next . Or, select Create new Playbook and enter a name for your playbook, then click Next . On the left navigation, click on Remediations . Click on the playbook name to see the playbook details, or simply select it and click Download playbook . 1.4. Updating errata for systems managed by Red Hat Satellite Insights for Red Hat Enterprise Linux calculates applicable updates based on the packages, repositories, and modules that a system reports when it checks in. Insights combines these results with a client-side evaluation, and stores the resulting superset of updates as applicable updates. A system check-in to Red Hat Insights includes the following content-related data: Installed packages Enabled repositories Enabled modules List of updates, which the client determines using the dnf updateinfo -C command. This command primarily captures package updates for non-Red Hat repositories. Insights uses this collection of data to calculate applicable updates for the system. Sometimes Insights calculates applicable updates for systems managed by Red Hat Satellite and reports inaccurate results. This issue can manifest in two ways: Insights shows installable updates that cannot be installed on the Satellite-managed system. Insights shows applicable updates that match what can be installed on the system immediately after patching, but shows outdated or missing updates a day or two later. This can occur when the system is subscribed to RHEL repositories that have been renamed. Insights now provides an optional check-in command to provide accurate reporting for applicable updates on Satellite-managed systems. This option rebuilds the yum/dnf package caches and creates a refreshed list of applicable updates for the system. Note Satellite-managed systems are not eligible to have Red Hat Insights content templates applied. Prerequisites Admin-level access to the system Procedure To rebuild the package caches from the command line, enter the following command: The command regenerates the dnf/yum caches and collects the relevant installable errata from Satellite. insights-client then generates a refreshed list of updates and sends it to Insights. Note The generated list of updates is equivalent to the output from the command dnf updateinfo list . 1.4.1. Configuring automatic check-in for insights-client You can edit the insights-client configuration file on your system ( /etc/insights-client/insights-client.conf ) to rebuild the package caches automatically each time the system checks in to Insights. Procedure Open the /etc/insights-client/insights-client.conf file in a text editor.
Look in the file for the following comment: Add the following line after the comment: Save your edits and exit the editor. When the system checks in to Satellite, insights-client executes a yum/dnf cache refresh before collecting the output of the client-side evaluation. Insights then reports the client-side evaluation output as installable updates. The evaluation output, based on what has been published to the CDN, is reported as applicable updates. Additional resources For more information about the --build-packagecache options, see the following KCS article: https://access.redhat.com/solutions/7041171 For more information about managing errata in Red Hat Satellite, see https://access.redhat.com/documentation/en-us/red_hat_satellite/6.15/html/managing_content/managing_errata_content-management . 1.5. Enabling notifications and integrations You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever the patch service detects an issue and generates an advisory. Using the notifications service frees you from having to continually check the Red Hat Insights for Red Hat Enterprise Linux dashboard for advisories. For example, you can configure the notifications service to automatically send an email message whenever the patch service generates an advisory. Enabling the notifications service requires three main steps: First, an Organization Administrator creates a User Access group with the Notifications-administrator role, and then adds account members to the group. , a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization Administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event. In addition to sending email messages, you can configure the notifications service to send event data using an authenticated client to query Red Hat Insights APIs. Additional resources For more information about how to set up notifications for patch advisories, see Configuring notifications on the Red Hat Hybrid Cloud Console with FedRAMP . | [
"insights-client --build-packagecache",
"#Set build_packagecache=True to refresh the yum/dnf cache during the insights-client check-in",
"build_packagecache=True"
]
| https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/managing_system_content_and_patch_updates_with_red_hat_insights_with_fedramp/patch-service-overview |
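For example, one way to apply both steps at once, assuming the stock configuration path and that appending to the end of the file lands in the [insights-client] section, is:
# Enable the cache rebuild on every check-in, adding the option only if it is not already set
grep -q '^build_packagecache=True' /etc/insights-client/insights-client.conf || \
    echo 'build_packagecache=True' >> /etc/insights-client/insights-client.conf
# Trigger an immediate check-in with a refreshed dnf/yum cache
insights-client --build-packagecache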
Installing Red Hat Trusted Application Pipeline | Installing Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.4 Learn how to install Red Hat Trusted Application Pipeline in your cluster. Red Hat Customer Content Services | [
"podman login registry.redhat.io",
"podman pull registry.redhat.io/rhtap-cli/rhtap-cli-rhel9:latest",
"podman run -it --entrypoint=bash --publish 8228:8228 --rm rhtap-cli:latest",
"bash-5.1USD oc login https://api.<input omitted>.openshiftapps.com:443 --username cluster-admin --password <input omitted>",
"bash-5.1USD rhtap-cli integration github-app --create --token=\"USDGH_TOKEN\" --org=\"USDGH_ORG_NAME\" USDGH_APP_NAME",
"bash-5.1USD rhtap-cli integration acs --endpoint=\"USDACS_ENDPOINT\" --token=\"USDACS_TOKEN\"",
"bash-5.1USD rhtap-cli integration quay --dockerconfigjson='USDQUAY_DOCKERCONFIGJSON' --token=\"USDQUAY_TOKEN\" --url=\"USDQUAY_URL\"",
"bash-5.1USD rhtap-cli integration bitbucket --username=\"USDBB_USERNAME\" --app-password=\"USDBB_TOKEN\" --host=\"USDBB_URL\"",
"bash-5.1USD rhtap-cli integration gitlab --token=\"USDGL_API_TOKEN\" --host=\"USDGL_URL\"",
"bash-5.1USD rhtap-cli integration jenkins --token=\"USDJK_API_TOKEN\" --url=\"USDJK_URL\" --username=\"USDJK_USERNAME\"",
"bash-5.1USD rhtap-cli integration artifactory --url=\"USDAF_URL\" --dockerconfigjson='USDAF_DOCKERCONFIGJSON' --token=\"USDAF_API_TOKEN\"",
"bash-5.1USD cp config.yaml my-config.yaml",
"bash-5.1USD vi my-config.yaml",
"redHatDeveloperHub: enabled: &rhdhEnabled true namespace: *installerNamespace properties: catalogURL: https://github.com/<your username>/tssc-sample-templates/blob/release/all.yaml",
"redHatAdvancedClusterSecurity: enabled: &rhacsEnabled false namespace: rhtap-acs",
"bash-5.1USD rhtap-cli deploy --config=USDCONFIG"
]
| https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html-single/installing_red_hat_trusted_application_pipeline/registry.redhat.io |
Chapter 3. Common deployment patterns | Chapter 3. Common deployment patterns Red Hat AMQ 7 can be set up in a large variety of topologies. The following are some of the common deployment patterns you can implement using AMQ components. 3.1. Central broker The central broker pattern is relatively easy to set up and maintain. It is also relatively robust. Routes are typically local, because the broker and its clients are always within one network hop of each other, no matter how many nodes are added. This pattern is also known as hub and spoke , with the central broker as the hub and the clients the spokes. Figure 3.1. Central broker pattern The only critical element is the central broker node. The focus of your maintenance efforts is on keeping this broker available to its clients. 3.2. Routed messaging When routing messages to remote destinations, the broker stores them in a local queue before forwarding them to their destination. However, sometimes an application requires sending request and response messages in real time, and having the broker store and forward messages is too costly. With AMQ you can use a router in place of a broker to avoid such costs. Unlike a broker, a router does not store messages before forwarding them to a destination. Instead, it works as a lightweight conduit and directly connects two endpoints. Figure 3.2. Brokerless routed messaging pattern 3.3. Highly available brokers To ensure brokers are available for their clients, deploy a highly available (HA) master-slave pair to create a backup group. You might, for example, deploy two master-slave groups on two nodes. Such a deployment would provide a backup for each active broker, as seen in the following diagram. Figure 3.3. Master-slave pair Under normal operating conditions one master broker is active on each node, which can be either a physical server or a virtual machine. If one node fails, the slave on the other node takes over. The result is two active brokers residing on the same healthy node. By deploying master-slave pairs, you can scale out an entire network of such backup groups. Larger deployments of this type are useful for distributing the message processing load across many brokers. The broker network in the following diagram consists of eight master-slave groups distributed over eight nodes. Figure 3.4. Master-slave network 3.4. Router pair behind a load balancer Deploying two routers behind a load balancer provides high availability, resiliency, and increased scalability for a single-datacenter deployment. Endpoints make their connections to a known URL, supported by the load balancer. , the load balancer spreads the incoming connections among the routers so that the connection and messaging load is distributed. If one of the routers fails, the endpoints connected to it will reconnect to the remaining active router. Figure 3.5. Router pair behind a load balancer For even greater scalability, you can use a larger number of routers, three or four for example. Each router connects directly to all of the others. 3.5. Router pair in a DMZ In this deployment architecture, the router network is providing a layer of protection and isolation between the clients in the outside world and the brokers backing an enterprise application. Figure 3.6. Router pair in a DMZ Important notes about the DMZ topology: Security for the connections within the deployment is separate from the security used for external clients. 
For example, your deployment might use a private Certificate Authority (CA) for internal security, issuing x.509 certificates to each router and broker for authentication, although external users might use a different, public CA. Inter-router connections between the enterprise and the DMZ are always established from the enterprise to the DMZ for security. Therefore, no connections are permitted from the outside into the enterprise. The AMQP protocol enables bi-directional communication after a connection is established, however. 3.6. Router pairs in different data centers You can use a more complex topology in a deployment of AMQ components that spans multiple locations. You can, for example, deploy a pair of load-balanced routers in each of four locations. You might include two backbone routers in the center to provide redundant connectivity between all locations. The following diagram is an example deployment spanning multiple locations. Figure 3.7. Multiple interconnected routers Revised on 2021-02-23 10:30:44 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/introducing_red_hat_amq_7/common_deployment_patterns |
Chapter 22. Shutting down and starting up the undercloud and overcloud | Chapter 22. Shutting down and starting up the undercloud and overcloud If you must perform maintenance on the undercloud and overcloud, you must shut down and start up the undercloud and overcloud nodes in a specific order to ensure minimal issues when you start your overcloud. Prerequisites A running undercloud and overcloud 22.1. Undercloud and overcloud shutdown order To shut down the Red Hat OpenStack Platform environment, you must shut down the overcloud and undercloud in the following order: Shut down instances on overcloud Compute nodes Shut down Compute nodes Stop all high availability and OpenStack Platform services on Controller nodes Shut down Ceph Storage nodes Shut down Controller nodes Shut down the undercloud 22.2. Shutting down instances on overcloud Compute nodes As a part of shutting down the Red Hat OpenStack Platform environment, shut down all instances on Compute nodes before shutting down the Compute nodes. Prerequisites An overcloud with active Compute services Procedure Log in to the undercloud as the stack user. Source the credentials file for your overcloud: View running instances in the overcloud: Stop each instance in the overcloud: Repeat this step for each instance until you stop all instances in the overcloud. 22.3. Shutting down Compute nodes As a part of shutting down the Red Hat OpenStack Platform environment, log in to and shut down each Compute node. Prerequisites Shut down all instances on the Compute nodes Procedure Log in as the root user to a Compute node. Shut down the node: Perform these steps for each Compute node until you shut down all Compute nodes. 22.4. Stopping services on Controller nodes As a part of shutting down the Red Hat OpenStack Platform environment, stop services on the Controller nodes before shutting down the nodes. This includes Pacemaker and systemd services. Prerequisites An overcloud with active Pacemaker services Procedure Log in as the root user to a Controller node. Stop the Pacemaker cluster. This command stops the cluster on all nodes. Wait until the Pacemaker services stop and check that the services stopped. Check the Pacemaker status: Check that no Pacemaker services are running in Podman: Stop the Red Hat OpenStack Platform services: Wait until the services stop and check that services are no longer running in Podman: 22.5. Shutting down Ceph Storage nodes As a part of shutting down the Red Hat OpenStack Platform environment, disable Ceph Storage services, then log in to and shut down each Ceph Storage node. Prerequisites A healthy Ceph Storage cluster Ceph MON services are running on standalone Ceph MON nodes or on Controller nodes Procedure Log in as the root user to a node that runs Ceph MON services, such as a Controller node or a standalone Ceph MON node. Check the health of the cluster. In the following example, the podman command runs a status check within a Ceph MON container on a Controller node: Ensure that the status is HEALTH_OK . Set the noout , norecover , norebalance , nobackfill , nodown , and pause flags for the cluster. In the following example, the podman commands set these flags through a Ceph MON container on a Controller node: Shut down each Ceph Storage node: Log in as the root user to a Ceph Storage node. Shut down the node: Perform these steps for each Ceph Storage node until you shut down all Ceph Storage nodes. Shut down any standalone Ceph MON nodes: Log in as the root user to a standalone Ceph MON node.
Shut down the node: Perform these steps for each standalone Ceph MON node until you shut down all standalone Ceph MON nodes. Additional resources "What is the procedure to shutdown and bring up the entire ceph cluster?" 22.6. Shutting down Controller nodes As a part of shutting down the Red Hat OpenStack Platform environment, log in to and shut down each Controller node. Prerequisites Stop the Pacemaker cluster Stop all Red Hat OpenStack Platform services on the Controller nodes Procedure Log in as the root user to a Controller node. Shut down the node: Perform these steps for each Controller node until you shut down all Controller nodes. 22.7. Shutting down the undercloud As a part of shutting down the Red Hat OpenStack Platform environment, log in to the undercloud node and shut down the undercloud. Prerequisites A running undercloud Procedure Log in to the undercloud as the stack user. Shut down the undercloud: 22.8. Performing system maintenance After you completely shut down the undercloud and overcloud, perform any maintenance to the systems in your environment and then start up the undercloud and overcloud. 22.9. Undercloud and overcloud startup order To start the Red Hat OpenStack Platform environment, you must start the undercloud and overcloud in the following order: Start the undercloud. Start Controller nodes. Start Ceph Storage nodes. Start Compute nodes. Start instances on overcloud Compute nodes. 22.10. Starting the undercloud As a part of starting the Red Hat OpenStack Platform environment, power on the undercloud node, log in to the undercloud, and check the undercloud services. Prerequisites The undercloud is powered down. Procedure Power on the undercloud and wait until the undercloud boots. Verification Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Check the services on the undercloud: Validate the static inventory file named tripleo-ansible-inventory.yaml : Replace <inventory_file> with the name and location of the Ansible inventory file, for example, ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml . Note When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files. Check that all services and containers are active and healthy: Additional resources Using the validation framework 22.11. Starting Controller nodes As a part of starting the Red Hat OpenStack Platform environment, power on each Controller node and check the non-Pacemaker services on the node. Prerequisites The Controller nodes are powered down. Procedure Power on each Controller node. Verification Log in to each Controller node as the root user. Check the services on the Controller node: Only non-Pacemaker based services are running. Wait until the Pacemaker services start and check that the services started: Note If your environment uses Instance HA, the Pacemaker resources do not start until you start the Compute nodes or perform a manual unfence operation with the pcs stonith confirm <compute_node> command. You must run this command on each Compute node that uses Instance HA. 22.12. Starting Ceph Storage nodes As a part of starting the Red Hat OpenStack Platform environment, power on the Ceph MON and Ceph Storage nodes and enable Ceph Storage services. 
Prerequisites A powered down Ceph Storage cluster Ceph MON services are enabled on powered down standalone Ceph MON nodes or on powered on Controller nodes Procedure If your environment has standalone Ceph MON nodes, power on each Ceph MON node. Power on each Ceph Storage node. Log in as the root user to a node that runs Ceph MON services, such as a Controller node or a standalone Ceph MON node. Check the status of the cluster nodes. In the following example, the podman command runs a status check within a Ceph MON container on a Controller node: Ensure that each node is powered on and connected. Unset the noout , norecover , norebalance , nobackfill , nodown and pause flags for the cluster. In the following example, the podman commands unset these flags through a Ceph MON container on a Controller node: Verification Check the health of the cluster. In the following example, the podman command runs a status check within a Ceph MON container on a Controller node: Ensure the status is HEALTH_OK . Additional resources "What is the procedure to shutdown and bring up the entire ceph cluster?" 22.13. Starting Compute nodes As a part of starting the Red Hat OpenStack Platform environment, power on each Compute node and check the services on the node. Prerequisites Powered down Compute nodes Procedure Power on each Compute node. Verification Log in to each Compute node as the root user. Check the services on the Compute node: 22.14. Starting instances on overcloud Compute nodes As a part of starting the Red Hat OpenStack Platform environment, start the instances on the Compute nodes. Prerequisites An active overcloud with active nodes Procedure Log in to the undercloud as the stack user. Source the credentials file for your overcloud: View running instances in the overcloud: Start an instance in the overcloud: | [
"source ~/overcloudrc",
"openstack server list --all-projects",
"openstack server stop <INSTANCE>",
"shutdown -h now",
"pcs cluster stop --all",
"pcs status",
"podman ps --filter \"name=.*-bundle.*\"",
"systemctl stop 'tripleo_*'",
"podman ps",
"sudo podman exec -it ceph-mon-controller-0 ceph status",
"sudo podman exec -it ceph-mon-controller-0 ceph osd set noout sudo podman exec -it ceph-mon-controller-0 ceph osd set norecover sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance sudo podman exec -it ceph-mon-controller-0 ceph osd set nobackfill sudo podman exec -it ceph-mon-controller-0 ceph osd set nodown sudo podman exec -it ceph-mon-controller-0 ceph osd set pause",
"shutdown -h now",
"shutdown -h now",
"shutdown -h now",
"sudo shutdown -h now",
"source ~/stackrc",
"systemctl list-units 'tripleo_*'",
"validation run --group pre-introspection -i <inventory_file>",
"validation run --validation service-status --limit undercloud -i <inventory_file>",
"systemctl -t service",
"pcs status",
"sudo podman exec -it ceph-mon-controller-0 ceph status",
"sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout sudo podman exec -it ceph-mon-controller-0 ceph osd unset norecover sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance sudo podman exec -it ceph-mon-controller-0 ceph osd unset nobackfill sudo podman exec -it ceph-mon-controller-0 ceph osd unset nodown sudo podman exec -it ceph-mon-controller-0 ceph osd unset pause",
"sudo podman exec -it ceph-mon-controller-0 ceph status",
"systemctl -t service",
"source ~/overcloudrc",
"openstack server list --all-projects",
"openstack server start <INSTANCE>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_shutting-down-and-starting-up-the-undercloud-and-overcloud |
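If you have many instances, you might script the stop step rather than issuing openstack server stop once per instance. A possible sketch, run from the undercloud as the stack user, is:
# Source the overcloud credentials, then stop every ACTIVE instance before powering off the Compute nodes
source ~/overcloudrc
for id in $(openstack server list --all-projects --status ACTIVE -f value -c ID); do
    openstack server stop "$id"
done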
2.7. Setting Parameters | 2.7. Setting Parameters Set subsystem parameters by running the cgset command from a user account with permission to modify the relevant cgroup. For example, if cpuset is mounted to /cgroup/cpu_and_mem/ and the /cgroup/cpu_and_mem/group1 subdirectory exists, specify the CPUs to which this group has access with the following command: The syntax for cgset is: where: parameter is the parameter to be set, which corresponds to the file in the directory of the given cgroup. value is the value for the parameter. path_to_cgroup is the path to the cgroup relative to the root of the hierarchy . For example, to set the parameter of the root group (if the cpuacct subsystem is mounted to /cgroup/cpu_and_mem/ ), change to the /cgroup/cpu_and_mem/ directory, and run: Alternatively, because . is relative to the root group (that is, the root group itself) you could also run: Note, however, that / is the preferred syntax. Note Only a small number of parameters can be set for the root group (such as the cpuacct.usage parameter shown in the examples above). This is because a root group owns all of the existing resources, therefore, it would make no sense to limit all existing processes by defining certain parameters, for example the cpuset.cpu parameter. To set the parameter of group1 , which is a subgroup of the root group, run: A trailing slash after the name of the group (for example, cpuacct.usage=0 group1/ ) is optional. The values that you can set with cgset might depend on values set higher in a particular hierarchy. For example, if group1 is limited to use only CPU 0 on a system, you cannot set group1/subgroup1 to use CPUs 0 and 1, or to use only CPU 1. You can also use cgset to copy the parameters of one cgroup into another existing cgroup. For example: The syntax to copy parameters with cgset is: where: path_to_source_cgroup is the path to the cgroup whose parameters are to be copied, relative to the root group of the hierarchy. path_to_target_cgroup is the path to the destination cgroup, relative to the root group of the hierarchy. Ensure that any mandatory parameters for the various subsystems are set before you copy parameters from one group to another, or the command will fail. For more information on mandatory parameters, refer to Important . Alternative method To set parameters in a cgroup directly, insert values into the relevant subsystem pseudofile using the echo command. In the following example, the echo command inserts the value of 0-1 into the cpuset.cpus pseudofile of the cgroup group1 : With this value in place, the tasks in this cgroup are restricted to CPUs 0 and 1 on the system. | [
"cpu_and_mem]# cgset -r cpuset.cpus=0-1 group1",
"cgset -r parameter = value path_to_cgroup",
"cpu_and_mem]# cgset -r cpuacct.usage=0 /",
"cpu_and_mem]# cgset -r cpuacct.usage=0 .",
"cpu_and_mem]# cgset -r cpuacct.usage=0 group1",
"cpu_and_mem]# cgset --copy-from group1/ group2/",
"cgset --copy-from path_to_source_cgroup path_to_target_cgroup",
"~]# echo 0-1 > /cgroup/cpu_and_mem/group1/cpuset.cpus"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/Setting_Parameters |
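After setting a value, you can read it back to confirm that it was applied. For example, assuming the same hierarchy mounted under /cgroup/cpu_and_mem:
# Restrict group1 to CPUs 0 and 1, then verify the value with cgget (also part of libcgroup)
cgset -r cpuset.cpus=0-1 group1
cgget -r cpuset.cpus group1
# The same check works against the pseudofile directly
cat /cgroup/cpu_and_mem/group1/cpuset.cpus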
Chapter 9. Logging | Chapter 9. Logging 9.1. Configuring logging The client uses the SLF4J API, enabling users to select a particular logging implementation based on their needs. For example, users can provide the slf4j-log4j binding to select the Log4J implementation. More details on SLF4J are available from its website . The client uses Logger names residing within the org.apache.qpid.jms hierarchy, which you can use to configure a logging implementation based on your needs. 9.2. Enabling protocol logging When debugging, it is sometimes useful to enable additional protocol trace logging from the Qpid Proton AMQP 1.0 library. There are two ways to achieve this. Set the environment variable (not the Java system property) PN_TRACE_FRM to 1 . When the variable is set to 1 , Proton emits frame logging to the console. Add the option amqp.traceFrames=true to your connection URI and configure the org.apache.qpid.jms.provider.amqp.FRAMES logger to log level TRACE . This adds a protocol tracer to Proton and includes the output in your logs. You can also configure the client to emit low-level tracing of input and output bytes. To enable this, add the option transport.traceBytes=true to your connection URI and configure the org.apache.qpid.jms.transports.netty.NettyTcpTransport logger to log level DEBUG . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_client/logging |
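For example, one way to capture frame tracing for a single run of a client application, assuming a runnable JAR, is:
# Proton prints AMQP frame traces to the console for this run only
PN_TRACE_FRM=1 java -jar my-jms-client.jar
# Alternatively, add amqp.traceFrames=true to the connection URI, for example amqp://localhost:5672?amqp.traceFrames=true,
# and set the org.apache.qpid.jms.provider.amqp.FRAMES logger to TRACE in your SLF4J binding's configuration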
Chapter 6. Understanding Red Hat OpenShift Service on AWS development | Chapter 6. Understanding Red Hat OpenShift Service on AWS development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in Red Hat OpenShift Service on AWS. It also describes which approaches you might use for different kinds of applications and development requirements. 6.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 6.2. Building a simple container You have an idea for an application and you want to containerize it. First you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . , you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 6.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. 
Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile . In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. 6.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in Red Hat OpenShift Service on AWS or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in Red Hat OpenShift Service on AWS, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an Red Hat OpenShift Service on AWS cluster, but CRI-O is not yet supported as a standalone runtime outside of Red Hat OpenShift Service on AWS. 6.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the Red Hat OpenShift Service on AWS web UI. In the Developer perspective, navigate to the +Add view and in the Developer Catalog tile, view all of the available services in the Developer Catalog. Figure 6.2. Choose S2I base images for apps that need specific runtimes 6.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in Red Hat OpenShift Service on AWS are stored in Quay.io, including container images and the Operators that are used to deploy Red Hat OpenShift Service on AWS itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, Red Hat OpenShift Service on AWS itself includes a private container registry that is installed with Red Hat OpenShift Service on AWS and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from Red Hat OpenShift Service on AWS, while other credentials can be assigned to individuals. 6.3. Creating a Kubernetes manifest for Red Hat OpenShift Service on AWS While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as Red Hat OpenShift Service on AWS. 
The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 6.3.1. About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the Red Hat OpenShift Service on AWS cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 6.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete.
The application might not run again then for a month. Suitable Red Hat OpenShift Service on AWS objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment. Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application. With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so it can handle things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 6.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the Red Hat OpenShift Service on AWS web console: OperatorHub, which is available in each Red Hat OpenShift Service on AWS 4 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 6.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. 
You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 6.3.5. steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML. You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 6.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/architecture/understanding-development |
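For example, the build, tag, push, and run flow described above might look like the following, using the illustrative quay.io/myrepo/myapp:latest name from the text; the manifest file name is also illustrative:
# Build a container image from the Dockerfile in the current directory and tag it for the registry
buildah build-using-dockerfile -t quay.io/myrepo/myapp:latest .
# Push the image, then pull and run it from any host with a container client
podman push quay.io/myrepo/myapp:latest
podman run --rm quay.io/myrepo/myapp:latest
# Later, apply the Kubernetes manifest that describes the application to the cluster
oc apply -f myapp-deployment.yaml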
7.204. qt | 7.204. qt 7.204.1. RHBA-2012:1246 - qt bug fix update Updated qt packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The qt packages contain a software toolkit that simplifies the task of writing and maintaining GUI (Graphical User Interface) applications for the X Window System. Bug Fixes BZ#678604 Prior to this update, the mouse pointer could, under certain circumstances, disappear when using the IRC client Konversation. This update modifies the underlying code to reset the cursor on the parent and set the cursor on the new window handle. Now, the mouse pointer no longer disappears. BZ#847866 Prior to this update, the high precision coordinates of the QTabletEvent class failed to handle multiple Wacom devices. As a consequence, only the device that was loaded first worked correctly. This update modifies the underlying code so that multiple Wacom devices are handled as expected. All users of qt are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/qt
Chapter 5. Upgrading the Migration Toolkit for Containers | Chapter 5. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.18 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 4.5, and earlier versions, by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 5.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.18 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.18 by using the Operator Lifecycle Manager. Important When upgrading the MTC by using the Operator Lifecycle Manager, you must use a supported migration path. Migration paths Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Migrating from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x may be used. However, it must be the same MTC version on both source & destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 5.2. Upgrading the Migration Toolkit for Containers to 1.8.0 To upgrade the Migration Toolkit for Containers to 1.8.0, complete the following steps. 
Procedure Determine subscription names and current channels to work with for upgrading by using one of the following methods: Determine the subscription names and channels by running the following command: USD oc -n openshift-migration get sub Example output NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0 Or return the subscription names and channels in JSON by running the following command: USD oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }' Example output { "name": "mtc-operator", "package": "mtc-operator", "channel": "release-v1.7" } { "name": "redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace", "package": "redhat-oadp-operator", "channel": "stable-1.0" } For each subscription, patch to move from the MTC 1.7 channel to the MTC 1.8 channel by running the following command: USD oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{"spec": {"channel": "release-v1.8"}}' Example output subscription.operators.coreos.com/mtc-operator patched 5.2.1. Upgrading OADP 1.0 to 1.2 for Migration Toolkit for Containers 1.8.0 To upgrade OADP 1.0 to 1.2 for Migration Toolkit for Containers 1.8.0, complete the following steps. Procedure For each subscription, patch the OADP operator from OADP 1.0 to OADP 1.2 by running the following command: USD oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{"spec": {"channel":"stable-1.2"}}' Note Sections indicating the user-specific returned NAME values that are used for the installation of MTC & OADP, respectively. Example output subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched Note The returned value will be similar to redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace , which is used in this example. If the installPlanApproval parameter is set to Automatic , the Operator Lifecycle Manager (OLM) begins the upgrade process. If the installPlanApproval parameter is set to Manual , you must approve each installPlan before the OLM begins the upgrades. Verification Verify that the OLM has completed the upgrades of OADP and MTC by running the following command: USD oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (."state"=="AtLatestKnown")' When a value of true is returned, verify the channel used for each subscription by running the following command: USD oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }' Example output { "name": "mtc-operator", "channel": "release-v1.8" } { "name": "redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace", "channel": "stable-1.2" } USD oc -n openshift-migration get csv Example output NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded 5.3. 
Upgrading the Migration Toolkit for Containers on OpenShift Container Platform versions 4.2 to 4.5 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform versions 4.2 to 4.5 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 5.4. Upgrading MTC 1.3 to 1.8 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.8, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Important Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Upgrading MTC 1.7.x to 1.8.x requires manually updating the OADP channel from stable-1.0 to stable-1.2 in order to successfully complete the upgrade from 1.7.x to 1.8.x. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ... spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration | [
"oc -n openshift-migration get sub",
"NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0",
"oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }'",
"{ \"name\": \"mtc-operator\", \"package\": \"mtc-operator\", \"channel\": \"release-v1.7\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"package\": \"redhat-oadp-operator\", \"channel\": \"stable-1.0\" }",
"oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{\"spec\": {\"channel\": \"release-v1.8\"}}'",
"subscription.operators.coreos.com/mtc-operator patched",
"oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{\"spec\": {\"channel\":\"stable-1.2\"}}'",
"subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched",
"oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (.\"state\"==\"AtLatestKnown\")'",
"oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }'",
"{ \"name\": \"mtc-operator\", \"channel\": \"release-v1.8\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"channel\": \"stable-1.2\" }",
"Confirm that the `mtc-operator.v1.8.0` and `oadp-operator.v1.2.x` packages are installed by running the following command:",
"oc -n openshift-migration get csv",
"NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migration_toolkit_for_containers/upgrading-mtc |
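The OADP upgrade in section 5.2.1 above notes that when the installPlanApproval parameter is set to Manual, each InstallPlan must be approved before OLM proceeds. As an illustrative sketch that is not part of the original procedure, one way to locate and approve a pending InstallPlan from the command line is shown below; the <install_plan_name> placeholder stands for whichever name the first command returns in your cluster.

# List InstallPlans in the MTC namespace and note any with APPROVED set to false
oc -n openshift-migration get installplan

# Approve the pending InstallPlan so that OLM can continue the upgrade
oc -n openshift-migration patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'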
Chapter 111. AclRuleTopicResource schema reference | Chapter 111. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Property type Description type string Must be topic . name string Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. patternType string (one of [prefix, literal]) Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-aclruletopicresource-reference |
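To make the schema reference above concrete, the following fragment is an illustrative sketch, not taken from the reference itself, of a KafkaUser that uses an AclRuleTopicResource with the prefix pattern type. The user name my-user, the cluster label my-cluster, and the topic prefix orders- are placeholder assumptions.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # AclRuleTopicResource: applies to every topic whose name starts with "orders-"
      - resource:
          type: topic
          name: orders-
          patternType: prefix
        operations:
          - Read
          - Describe

Because patternType defaults to literal, omitting it would instead restrict the rule to a single topic named exactly orders- .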
Chapter 4. Upgrade Quay Bridge Operator | Chapter 4. Upgrade Quay Bridge Operator To upgrade the Quay Bridge Operator (QBO), change the Channel Subscription update channel in the Subscription tab to the desired channel. When upgrading QBO from version 3.5 to 3.7, a number of extra steps are required: You need to create a new QuayIntegration custom resource. This can be completed in the Web Console or from the command line. upgrade-quay-integration.yaml - apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com 1 Make sure that the clusterID matches the value for the existing QuayIntegration resource. Create the new QuayIntegration custom resource: USD oc create -f upgrade-quay-integration.yaml Delete the old QuayIntegration custom resource. Delete the old mutatingwebhookconfigurations : USD oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator 4.1. Upgrading a geo-replication deployment of the Red Hat Quay Operator Use the following procedure to upgrade your geo-replicated Red Hat Quay Operator. Important When upgrading geo-replicated Red Hat Quay Operator deployments to the next y-stream release (for example, Red Hat Quay 3.7 to Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime when upgrading from one y-stream release to the next. It is highly recommended to back up your Red Hat Quay Operator deployment before upgrading. Procedure This procedure assumes that you are running the Red Hat Quay Operator on three (or more) systems. For this procedure, we will assume three systems named System A, System B, and System C . System A will serve as the primary system in which the Red Hat Quay Operator is deployed. On System B and System C, scale down your Red Hat Quay Operator deployment. This is done by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair (if it is managed). Use the following quayregistry.yaml file as a reference: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ... 1 Disable auto scaling of Quay, Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage Note You must keep the Red Hat Quay Operator running on System A. Do not update the quayregistry.yaml file on System A. Wait for the registry-quay-app , registry-quay-mirror , and registry-clair-app pods to disappear. Enter the following command to check their status: oc get pods -n <quay-namespace> Example output quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-config-editor-6dfdcfc44f-hlvwm 1/1 Running 0 73s quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m On System A, initiate a Red Hat Quay Operator upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators .
For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator . After the new Red Hat Quay Operator is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started. Confirm that the update has properly worked by navigating to the Red Hat Quay UI: In the OpenShift console, navigate to Operators Installed Operators , and click the Registry Endpoint link. Important Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay Operator on System B and on System C until the UI is available on System A. After confirming that the update has properly worked on System A, initiate the Red Hat Quay Operator on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted. Note Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start. | [
"- apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com",
"oc create -f upgrade-quay-integration.yaml",
"oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...",
"get pods -n <quay-namespace>",
"quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-config-editor-6dfdcfc44f-hlvwm 1/1 Running 0 73s quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/upgrade_red_hat_quay/qbo-operator-upgrade |
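The chapter above switches the QBO update channel through the Subscription tab in the web console. As an illustrative alternative sketch, not part of the original procedure, the same change can usually be made from the CLI by patching the Operator's Subscription object; the subscription name quay-bridge-operator, the namespace openshift-operators, and the channel name stable-3.7 are placeholder assumptions, so confirm your own values with the first command.

# Find the QBO subscription and its current channel
oc -n openshift-operators get subscription

# Point the subscription at the desired update channel
oc -n openshift-operators patch subscription quay-bridge-operator --type merge --patch '{"spec":{"channel":"stable-3.7"}}'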
Chapter 25. OpenShift by Red Hat | Chapter 25. OpenShift by Red Hat OpenShift by Red Hat is a Platform as a Service (PaaS) that enables developers to build and deploy web applications. OpenShift provides a wide selection of programming languages and frameworks including Java, Ruby, and PHP. It also provides integrated developer tools to support the application life cycle, including Eclipse integration, JBoss Developer Studio, and Jenkins. OpenShift uses an open source ecosystem to provide a platform for mobile applications, database services, and more. [24] In Red Hat Enterprise Linux, the openshift-clients package provides the OpenShift client tools. Enter the following command to see if it is installed: If the openshift-clients package is not installed, see the OpenShift Enterprise Client Tools Installation Guide and OpenShift Online Client Tools Installation Guide for detailed information on the OpenShift client tools installation process. Important Previously, the rhc package provided the OpenShift client tools. With the latest OpenShift versions, this package has been deprecated and is no longer supported by Red Hat. Hence, after OpenShift version 2, the rhc package is replaced with the openshift-clients package that provides the OpenShift client tools used for supported OpenShift versions. 25.1. OpenShift and SELinux SELinux provides better security control over applications that use OpenShift because all processes are labeled according to the SELinux policy. Therefore, SELinux protects OpenShift from possible malicious attacks within different gears running on the same node. See the Dan Walsh's presentation for more information about SELinux and OpenShift. [24] To learn more about OpenShift, see Product Documentation for OpenShift Container Platform and Product Documentation for OpenShift Online . | [
"~]USD rpm -q openshift-clients package openshift-clients is not installed"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-openshift |
Chapter 48. EntityUserOperatorSpec schema reference | Chapter 48. EntityUserOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityUserOperatorSpec schema properties Configures the User Operator. 48.1. logging The User Operator has a configurable logger: rootLogger.level The User Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.userOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the rootLogger.level . You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO logger.uop.name: io.strimzi.operator.user 1 logger.uop.level: DEBUG 2 logger.abstractcache.name: io.strimzi.operator.user.operator.cache.AbstractCache 3 logger.abstractcache.level: TRACE 4 logger.jetty.level: DEBUG 5 # ... 1 Creates a logger for the user package. 2 Sets the logging level for the user package. 3 Creates a logger for the AbstractCache class. 4 Sets the logging level for the AbstractCache class. 5 Changes the logging level for the default jetty logger. The jetty logger is part of the logging configuration provided with AMQ Streams. By default, it is set to INFO . Note When investigating an issue with the operator, it's usually sufficient to change the rootLogger to DEBUG to get more detailed logs. However, keep in mind that setting the log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 48.2. EntityUserOperatorSpec schema properties Property Description watchedNamespace The namespace the User Operator should watch. string image The image to use for the User Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. 
integer zookeeperSessionTimeoutSeconds The zookeeperSessionTimeoutSeconds property has been deprecated. This property has been deprecated because ZooKeeper is not used anymore by the User Operator. Timeout for the ZooKeeper session. integer secretPrefix The prefix that will be added to the KafkaUser name to be used as the Secret name. string livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO logger.uop.name: io.strimzi.operator.user 1 logger.uop.level: DEBUG 2 logger.abstractcache.name: io.strimzi.operator.user.operator.cache.AbstractCache 3 logger.abstractcache.level: TRACE 4 logger.jetty.level: DEBUG 5 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-EntityUserOperatorSpec-reference |
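The property table above lists several User Operator settings that the logging examples do not demonstrate. The fragment below is an illustrative sketch, with example values rather than defaults from the reference, that combines some of those properties - watchedNamespace, reconciliationIntervalSeconds, secretPrefix, resources, and jvmOptions - in the entityOperator.userOperator block.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60
      secretPrefix: kafka-              # KafkaUser secrets are created as kafka-<user name>
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi
      jvmOptions:
        "-Xms": 128m
        "-Xmx": 256m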
23.6. Memory Backing | 23.6. Memory Backing Memory backing allows the hypervisor to properly manage large pages within the guest virtual machine. <domain> ... <memoryBacking> <hugepages> <page size="1" unit="G" nodeset="0-3,5"/> <page size="2" unit="M" nodeset="4"/> </hugepages> <nosharepages/> <locked/> </memoryBacking> ... </domain> Figure 23.8. Memory backing For detailed information on memoryBacking elements, see the libvirt upstream documentation . | [
"<domain> <memoryBacking> <hugepages> <page size=\"1\" unit=\"G\" nodeset=\"0-3,5\"/> <page size=\"2\" unit=\"M\" nodeset=\"4\"/> </hugepages> <nosharepages/> <locked/> </memoryBacking> </domain>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-memory_backing |
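The <page> elements above only tell libvirt which page sizes and NUMA nodes should back the guest memory; the huge pages themselves must already be reserved on the host. The lines below are an illustrative sketch, not part of the original section, of reserving pages through sysfs for the nodesets used in the example. The counts are placeholder values, and on many systems 1 GiB pages can only be reserved reliably at boot time, for example with the hugepagesz= and hugepages= kernel parameters.

# Reserve four 1 GiB pages on NUMA node 0 (repeat for the other nodes in nodeset 0-3,5)
echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

# Reserve 512 2 MiB pages on NUMA node 4
echo 512 > /sys/devices/system/node/node4/hugepages/hugepages-2048kB/nr_hugepages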
Chapter 3. Using the Cluster Samples Operator with an alternate registry | Chapter 3. Using the Cluster Samples Operator with an alternate registry You can use the Cluster Samples Operator with an alternate registry by first creating a mirror registry. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet. 3.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.1.1. Preparing the mirror host Before you create the mirror registry, you must prepare the mirror host. 3.1.2. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Prerequisites You configured a mirror registry to use in your disconnected environment. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. 
The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.3. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . 
Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your server, such as x86_64 or aarch64 : USD ARCHITECTURE=<server_architecture> Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. 
Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> \ --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.4. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Note The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Important The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry. Procedure Access the images of a specific image stream to mirror, for example: USD oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need USD oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: USD oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. 
Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams. 3.4.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure. | [
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<server_architecture>",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> \\ --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io",
"oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/images/samples-operator-alt-registry |
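Section 3.4 above edits the Cluster Samples Operator configuration with oc edit but does not show the resulting object. The fragment below is an illustrative sketch of what configs.samples.operator.openshift.io/cluster might look like after pointing samplesRegistry at a mirror and skipping image streams that were not mirrored; the mirror hostname and the image stream names are placeholder assumptions.

apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  samplesRegistry: mirror.registry.example.com:5000   # hostname (and optional port) of your mirror
  skippedImagestreams:                                 # image streams you chose not to mirror
    - jenkins
    - jenkins-agent-base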
10.5.9.4. MinSpareServers and MaxSpareServers | 10.5.9.4. MinSpareServers and MaxSpareServers These values are only used with the prefork MPM. They adjust how the Apache HTTP Server dynamically adapts to the perceived load by maintaining an appropriate number of spare server processes based on the number of incoming requests. The server checks the number of servers waiting for a request and kills some if there are more than MaxSpareServers or creates some if the number of servers is less than MinSpareServers . The default MinSpareServers value is 5 ; the default MaxSpareServers value is 20 . These default settings should be appropriate for most situations. Be careful not to increase the MinSpareServers to a large number as doing so creates a heavy processing load on the server even when traffic is light. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-apache-minmaxspareservers |
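As an illustrative sketch, with example values rather than recommendations from this section, these directives sit together in the prefork block of httpd.conf:

<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          256
    MaxRequestsPerChild 4000
</IfModule>

If MaxSpareServers is set lower than MinSpareServers, Apache automatically adjusts it to MinSpareServers plus one.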
Chapter 8. Operator SDK | Chapter 8. Operator SDK 8.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. See Developing Operators for full documentation on the Operator SDK. Note OpenShift Container Platform 4.13 supports Operator SDK 1.28.0. 8.1.1. Installing the Operator SDK CLI on Linux You can install the OpenShift SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.13 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.28.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.28.0-ocp", ... 8.1.2. Installing the Operator SDK CLI on macOS You can install the OpenShift SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and OpenShift mirror site for the arm64 architecture respectively. From the latest 4.13 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.28.0-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for arm64 architecture by running the following command: USD tar xvf operator-sdk-v1.28.0-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command:: USD operator-sdk version Example output operator-sdk version: "v1.28.0-ocp", ... 8.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Operator SDK CLI syntax USD operator-sdk <command> [<subcommand>] [<argument>] [<flags>] See Developing Operators for full documentation on the Operator SDK. 8.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 8.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 8.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. 
--index-builder (string) Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 8.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 8.2. cleanup flags Flag Description -h , --help Help output for the run bundle subcommand. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 8.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 8.3. completion subcommands Subcommand Description bash Generate bash completions. zsh Generate zsh completions. Table 8.4. completion flags Flag Description -h, --help Usage help output. For example: USD operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 8.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 8.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 8.5. create api flags Flag Description -h , --help Help output for the run bundle subcommand. 8.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 8.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 8.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResoureDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. 
--version (string) Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. Additional resources See Bundling an Operator and deploying with Operator Lifecycle Manager for a full procedure that includes using the make bundle command to call the generate bundle subcommand. 8.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 8.2.5.2.1. manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 8.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. --output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 8.2.6. init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plugin. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 8.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 8.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 8.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 8.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". 
For more information about pod security admission, see "Understanding and managing pod security admission". Additional resources See Operator group membership for details on possible install modes. 8.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 8.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". 8.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely. Table 8.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . --pod-security <security_context> Option to run scorecard with the specified security context. Allowed values include restricted and legacy . The default value is legacy . [1] -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. -w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". Additional resources See Validating Operators using the scorecard tool for details about running the scorecard tool. | [
"tar xvf operator-sdk-v1.28.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.28.0-ocp\",",
"tar xvf operator-sdk-v1.28.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.28.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.28.0-ocp\",",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/operator-sdk |
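A typical end-to-end flow that ties these subcommands together is sketched below. This sketch is not part of the CLI reference above: the image references, the operator name ( memcached-operator ), and the bundle version are placeholder assumptions, and the bundle , bundle-build , and bundle-push targets are assumed to come from a Makefile scaffolded by operator-sdk init .

make bundle IMG=example.com/memcached-operator:v0.0.1 VERSION=0.0.1
make bundle-build bundle-push BUNDLE_IMG=example.com/memcached-operator-bundle:v0.0.1
operator-sdk run bundle example.com/memcached-operator-bundle:v0.0.1 -n operators --timeout 5m
operator-sdk scorecard example.com/memcached-operator-bundle:v0.0.1 -n operators -o json
operator-sdk cleanup memcached-operator -n operators

The run bundle , scorecard , and cleanup subcommands accept the --kubeconfig and -n flags listed in their tables, so the whole sequence can be pointed at a non-default cluster or namespace without changing the project itself.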
Chapter 16. Troubleshooting AMQ Interconnect | Chapter 16. Troubleshooting AMQ Interconnect You can use the AMQ Interconnect logs to diagnose and troubleshoot error and performance issues with the routers in your router network. 16.1. Viewing log entries You may need to view log entries to diagnose errors, performance problems, and other important issues. A log entry consists of an optional timestamp, the logging module, the logging level, and the log message. Procedure Do one of the following: View log entries on the console. By default, events are logged to the console, and you can view them there. However, if the output attribute is set for a particular logging module, then you can find those log entries in the specified location ( stderr , syslog , or a file). Use the qdstat --log command to view recent log entries. You can use the --limit parameter to limit the number of log entries that are displayed. For more information about qdstat , see qdstat man page . This example displays the last three log entries for Router.A : Note vhost entries are only populated if multiTenant is set to true in the /etc/qpid-dispatch/qdrouterd.conf configuration file. Additional resources For more information about configuring logging modules, see Section 11.2, "Configuring default logging" . 16.2. Troubleshooting using logs You can use AMQ Interconnect log entries to help diagnose error and performance issues with the routers in your network. Example 16.1. Troubleshooting connections and links In this example, ROUTER logs show the lifecycle of a connection and a link that is associated with it. 1 The connection is opened. Each connection has a unique ID ( C1 ). The log also shows some information about the connection. 2 A link is attached over the connection. The link is identified with a unique ID ( L6 ). The log also shows the direction of the link, and the source and target addresses. 3 The link is detached. The log shows the link's terminal statistics. 4 The connection is closed. Note If necessary, you can use qdmanage to enable protocol-level trace logging for a particular connection. You can use this to trace the AMQP frames. For example: Example 16.2. Troubleshooting the network topology In this example, on Router.A , the ROUTER_HELLO logs show that it is connected to Router.B , and that Router.B is connected to Router.A and Router.C : 1 Router.A received a Hello message from Router.B , which can see Router.A and Router.C . 2 Router.A sent a Hello message to Router.B , which is the only router it can see. On Router.B , the ROUTER_HELLO log shows the same router topology from a different perspective: 1 Router.B sent a Hello message to Router.A and Router.C . 2 Router.B received a Hello message from Router.A , which can only see Router.B . 3 Router.B received a Hello message from Router.C , which can only see Router.B . Example 16.3. Tracing the link state between routers Periodically, each router sends a Link State Request (LSR) to the other routers and receives a Link State Update (LSU) with the requested information. Exchanging the above information, each router can compute the hops in the topology, and the related costs. In this example, the ROUTER_LS logs show the RA, LSR, and LSU messages sent between three routers: 1 Router.A sent LSR requests and an RA advertisement to the other routers on the network. 2 Router.A received an LSU from Router.B , which has two peers: Router.A , and Router.C (with a cost of 1 ). 
3 Router.A received an LSR from both Router.B and Router.C , and replied with an LSU. 4 Router.A received an LSU from Router.C , which only has one peer: Router.B (with a cost of 1 ). 5 After the LSR and LSU messages are exchanged, Router.A computed the router topology with the related costs. Example 16.4. Tracing the state of mobile addresses attached to a router In this example, the ROUTER_MA logs show the Mobile Address Request (MAR) and Mobile Address Update (MAU) messages sent between three routers: 1 Router.A sent MAU messages to the other routers in the network to notify them about the addresses added for my_queue and my_queue_wp . 2 Router.A received a MAR message in response from Router.C . 3 Router.A received another MAR message in response from Router.B . 4 Router.C sent a MAU message to notify the other routers that it added and address for my_test . 5 Router.C sent another MAU message to notify the other routers that it deleted the address for my_test (because the receiver is detached). Example 16.5. Finding information about messages sent and received by a router In this example, the MESSAGE logs show that Router.A has sent and received some messages related to the Hello protocol, and sent and received some other messages on a link for a mobile address: Example 16.6. Tracking configuration changes to a router In this example, the AGENT logs show that on Router.A , address , linkRoute , and autoLink entities were added to the router's configuration file. When the router was started, the AGENT module applied these changes, and they are now viewable in the log: Example 16.7. Troubleshooting policy and vhost access rules In this example, the POLICY logs show that this router has no limits on maximum connections, and the default application policy is disabled: Example 16.8. Diagnosing errors In this example, the ERROR logs show that the router failed to start when an incorrect path was specified for the router's configuration file: Additional resources For more information about logging modules, see Section 11.1, "Logging modules" . | [
"qdstat --log --limit=3 -r ROUTER.A Wed Jun 7 17:49:32 2019 ROUTER (none) Core action 'link_deliver' Wed Jun 7 17:49:32 2019 ROUTER (none) Core action 'send_to' Wed Jun 7 17:49:32 2019 SERVER (none) [2]:0 -> @flow(19) [next-incoming-id=1, incoming-window=61, next-outgoing-id=0, outgoing-window=2147483647, handle=0, delivery-count=1, link-credit=250, drain=false]",
"2019-04-05 14:54:38.037248 -0400 ROUTER (info) [C1] Connection Opened: dir=in host=127.0.0.1:55440 vhost= encrypted=no auth=no user=anonymous container_id=95e55424-6c0a-4a5c-8848-65a3ea5cc25a props= 1 2019-04-05 14:54:38.038137 -0400 ROUTER (info) [C1][L6] Link attached: dir=in source={<none> expire:sess} target={USDmanagement expire:sess} 2 2019-04-05 14:54:38.041103 -0400 ROUTER (info) [C1][L6] Link lost: del=1 presett=0 psdrop=0 acc=1 rej=0 rel=0 mod=0 delay1=0 delay10=0 3 2019-04-05 14:54:38.041154 -0400 ROUTER (info) [C1] Connection Closed 4",
"qdmanage update --type=connection --id=C1 enableProtocolTrace=true",
"Tue Jun 7 13:50:21 2016 ROUTER_HELLO (trace) RCVD: HELLO(id=Router.B area=0 inst=1465307413 seen=['Router.A', 'Router.C']) 1 Tue Jun 7 13:50:21 2016 ROUTER_HELLO (trace) SENT: HELLO(id=Router.A area=0 inst=1465307416 seen=['Router.B']) 2 Tue Jun 7 13:50:22 2016 ROUTER_HELLO (trace) RCVD: HELLO(id=Router.B area=0 inst=1465307413 seen=['Router.A', 'Router.C']) Tue Jun 7 13:50:22 2016 ROUTER_HELLO (trace) SENT: HELLO(id=Router.A area=0 inst=1465307416 seen=['Router.B'])",
"Tue Jun 7 13:50:18 2016 ROUTER_HELLO (trace) SENT: HELLO(id=Router.B area=0 inst=1465307413 seen=['Router.A', 'Router.C']) 1 Tue Jun 7 13:50:18 2016 ROUTER_HELLO (trace) RCVD: HELLO(id=Router.A area=0 inst=1465307416 seen=['Router.B']) 2 Tue Jun 7 13:50:19 2016 ROUTER_HELLO (trace) RCVD: HELLO(id=Router.C area=0 inst=1465307411 seen=['Router.B']) 3",
"Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) SENT: LSR(id=Router.A area=0) to: Router.C Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) SENT: LSR(id=Router.A area=0) to: Router.B Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) SENT: RA(id=Router.A area=0 inst=1465308600 ls_seq=1 mobile_seq=1) 1 Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) RCVD: LSU(id=Router.B area=0 inst=1465308595 ls_seq=2 ls=LS(id=Router.B area=0 ls_seq=2 peers={'Router.A': 1L, 'Router.C': 1L})) 2 Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) RCVD: LSR(id=Router.B area=0) Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) SENT: LSU(id=Router.A area=0 inst=1465308600 ls_seq=1 ls=LS(id=Router.A area=0 ls_seq=1 peers={'Router.B': 1})) Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) RCVD: RA(id=Router.C area=0 inst=1465308592 ls_seq=1 mobile_seq=0) Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) SENT: LSR(id=Router.A area=0) to: Router.C Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) RCVD: LSR(id=Router.C area=0) 3 Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) SENT: LSU(id=Router.A area=0 inst=1465308600 ls_seq=1 ls=LS(id=Router.A area=0 ls_seq=1 peers={'Router.B': 1})) Tue Jun 7 14:10:02 2016 ROUTER_LS (trace) RCVD: LSU(id=Router.C area=0 inst=1465308592 ls_seq=1 ls=LS(id=Router.C area=0 ls_seq=1 peers={'Router.B': 1L})) 4 Tue Jun 7 14:10:03 2016 ROUTER_LS (trace) Computed next hops: {'Router.C': 'Router.B', 'Router.B': 'Router.B'} 5 Tue Jun 7 14:10:03 2016 ROUTER_LS (trace) Computed costs: {'Router.C': 2L, 'Router.B': 1} Tue Jun 7 14:10:03 2016 ROUTER_LS (trace) Computed valid origins: {'Router.C': [], 'Router.B': []}",
"Tue Jun 7 14:27:20 2016 ROUTER_MA (trace) SENT: MAU(id=Router.A area=0 mobile_seq=1 add=['Cmy_queue', 'Dmy_queue', 'M0my_queue_wp'] del=[]) 1 Tue Jun 7 14:27:21 2016 ROUTER_MA (trace) RCVD: MAR(id=Router.C area=0 have_seq=0) 2 Tue Jun 7 14:27:21 2016 ROUTER_MA (trace) SENT: MAU(id=Router.A area=0 mobile_seq=1 add=['Cmy_queue', 'Dmy_queue', 'M0my_queue_wp'] del=[]) Tue Jun 7 14:27:22 2016 ROUTER_MA (trace) RCVD: MAR(id=Router.B area=0 have_seq=0) 3 Tue Jun 7 14:27:22 2016 ROUTER_MA (trace) SENT: MAU(id=Router.A area=0 mobile_seq=1 add=['Cmy_queue', 'Dmy_queue', 'M0my_queue_wp'] del=[]) Tue Jun 7 14:27:39 2016 ROUTER_MA (trace) RCVD: MAU(id=Router.C area=0 mobile_seq=1 add=['M0my_test'] del=[]) 4 Tue Jun 7 14:27:51 2016 ROUTER_MA (trace) RCVD: MAU(id=Router.C area=0 mobile_seq=2 add=[] del=['M0my_test']) 5",
"Tue Jun 7 14:36:54 2016 MESSAGE (trace) Sending Message{to='amqp:/_topo/0/Router.B/qdrouter' body='\\d1\\00\\00\\00\\1b\\00\\00\\00\\04\\a1\\02id\\a1\\08R'} on link qdlink.p9XmBm19uDqx50R Tue Jun 7 14:36:54 2016 MESSAGE (trace) Received Message{to='amqp:/_topo/0/Router.A/qdrouter' body='\\d1\\00\\00\\00\\8e\\00\\00\\00 \\a1\\06ls_se'} on link qdlink.phMsJOq7YaFsGAG Tue Jun 7 14:36:54 2016 MESSAGE (trace) Received Message{ body='\\d1\\00\\00\\00\\10\\00\\00\\00\\02\\a1\\08seque'} on link qdlink.FYHqBX+TtwXZHfV Tue Jun 7 14:36:54 2016 MESSAGE (trace) Sending Message{ body='\\d1\\00\\00\\00\\10\\00\\00\\00\\02\\a1\\08seque'} on link qdlink.yU1tnPs5KbMlieM Tue Jun 7 14:36:54 2016 MESSAGE (trace) Sending Message{to='amqp:/_local/qdhello' body='\\d1\\00\\00\\00G\\00\\00\\00\\08\\a1\\04seen\\d0'} on link qdlink.p9XmBm19uDqx50R Tue Jun 7 14:36:54 2016 MESSAGE (trace) Sending Message{to='amqp:/_topo/0/Router.C/qdrouter' body='\\d1\\00\\00\\00\\1b\\00\\00\\00\\04\\a1\\02id\\a1\\08R'} on link qdlink.p9XmBm19uDqx50R",
"Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: ConnectorEntity(addr=127.0.0.1, allowRedirect=True, cost=1, host=127.0.0.1, identity=connector/127.0.0.1:5672:BROKER, idleTimeoutSeconds=16, maxFrameSize=65536, name=BROKER, port=5672, role=route-container, stripAnnotations=both, type=org.apache.qpid.dispatch.connector, verifyHostname=True) Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: RouterConfigAddressEntity(distribution=closest, identity=router.config.address/0, name=router.config.address/0, prefix=my_address, type=org.apache.qpid.dispatch.router.config.address, waypoint=False) Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: RouterConfigAddressEntity(distribution=balanced, identity=router.config.address/1, name=router.config.address/1, prefix=my_queue_wp, type=org.apache.qpid.dispatch.router.config.address, waypoint=True) Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: RouterConfigLinkrouteEntity(connection=BROKER, direction=in, distribution=linkBalanced, identity=router.config.linkRoute/0, name=router.config.linkRoute/0, prefix=my_queue, type=org.apache.qpid.dispatch.router.config.linkRoute) Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: RouterConfigLinkrouteEntity(connection=BROKER, direction=out, distribution=linkBalanced, identity=router.config.linkRoute/1, name=router.config.linkRoute/1, prefix=my_queue, type=org.apache.qpid.dispatch.router.config.linkRoute) Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: RouterConfigAutolinkEntity(address=my_queue_wp, connection=BROKER, direction=in, identity=router.config.autoLink/0, name=router.config.autoLink/0, type=org.apache.qpid.dispatch.router.config.autoLink) Tue Jun 7 15:07:32 2016 AGENT (debug) Add entity: RouterConfigAutolinkEntity(address=my_queue_wp, connection=BROKER, direction=out, identity=router.config.autoLink/1, name=router.config.autoLink/1, type=org.apache.qpid.dispatch.router.config.autoLink)",
"Tue Jun 7 15:07:32 2016 POLICY (info) Policy configured maximumConnections: 0, policyFolder: '', access rules enabled: 'false' Tue Jun 7 15:07:32 2016 POLICY (info) Policy fallback defaultApplication is disabled",
"qdrouterd --conf my_config Wed Jun 15 09:53:28 2016 ERROR (error) Python: Exception: Cannot load configuration file my_config: [Errno 2] No such file or directory: 'my_config' Wed Jun 15 09:53:28 2016 ERROR (error) Traceback (most recent call last): File \"/usr/lib/qpid-dispatch/python/qpid_dispatch_internal/management/config.py\", line 155, in configure_dispatch config = Config(filename) File \"/usr/lib/qpid-dispatch/python/qpid_dispatch_internal/management/config.py\", line 41, in __init__ self.load(filename, raw_json) File \"/usr/lib/qpid-dispatch/python/qpid_dispatch_internal/management/config.py\", line 123, in load with open(source) as f: Exception: Cannot load configuration file my_config: [Errno 2] No such file or directory: 'my_config' Wed Jun 15 09:53:28 2016 MAIN (critical) Router start-up failed: Python: Exception: Cannot load configuration file my_config: [Errno 2] No such file or directory: 'my_config' qdrouterd: Python: Exception: Cannot load configuration file my_config: [Errno 2] No such file or directory: 'my_config'"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/troubleshooting-router-rhel |
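When you work through the examples above on a live router network, it usually helps to pull a larger slice of the log first and then narrow in on a single connection. The following sketch only reuses commands already shown in this chapter; the router name ( Router.A ), the connection ID ( C1 ), and the entry limit are assumptions that you would replace with values from your own deployment.

qdstat --log --limit=50 -r Router.A
qdstat -c -r Router.A
qdmanage update --type=connection --id=C1 enableProtocolTrace=true

The first command reviews the 50 most recent log entries on Router.A, the second lists open connections so that you can find the connection ID, and the third enables protocol-level trace logging for that connection so that its AMQP frames are written to the router log.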
Chapter 5. OLMConfig [operators.coreos.com/v1] | Chapter 5. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object Required metadata 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OLMConfigSpec is the spec for an OLMConfig resource. status object OLMConfigStatus is the status for an OLMConfig resource. 5.1.1. .spec Description OLMConfigSpec is the spec for an OLMConfig resource. Type object Property Type Description features object Features contains the list of configurable OLM features. 5.1.2. .spec.features Description Features contains the list of configurable OLM features. Type object Property Type Description disableCopiedCSVs boolean DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. packageServerSyncInterval string PackageServerSyncInterval is used to define the sync interval for packagerserver pods. Packageserver pods periodically check the status of CatalogSources; this specifies the period using duration format (e.g. "60m"). For this parameter, only hours ("h"), minutes ("m"), and seconds ("s") may be specified. When not specified, the period defaults to the value specified within the packageserver. 5.1.3. .status Description OLMConfigStatus is the status for an OLMConfig resource. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 5.1.4. .status.conditions Description Type array 5.1.5. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 5.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/olmconfigs DELETE : delete collection of OLMConfig GET : list objects of kind OLMConfig POST : create an OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name} DELETE : delete an OLMConfig GET : read the specified OLMConfig PATCH : partially update the specified OLMConfig PUT : replace the specified OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name}/status GET : read status of the specified OLMConfig PATCH : partially update status of the specified OLMConfig PUT : replace status of the specified OLMConfig 5.2.1. /apis/operators.coreos.com/v1/olmconfigs HTTP method DELETE Description delete collection of OLMConfig Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OLMConfig Table 5.2. HTTP responses HTTP code Reponse body 200 - OK OLMConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an OLMConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body OLMConfig schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 202 - Accepted OLMConfig schema 401 - Unauthorized Empty 5.2.2. /apis/operators.coreos.com/v1/olmconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method DELETE Description delete an OLMConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OLMConfig Table 5.9. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OLMConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OLMConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body OLMConfig schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty 5.2.3. /apis/operators.coreos.com/v1/olmconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method GET Description read status of the specified OLMConfig Table 5.16. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OLMConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OLMConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body OLMConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operatorhub_apis/olmconfig-operators-coreos-com-v1 |
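As an illustration of the spec fields described above, the following manifest is a minimal sketch rather than an excerpt from this API reference; it assumes the cluster-scoped OLMConfig resource is named cluster and shows both configurable features being set.

apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true
    packageServerSyncInterval: 60m

Applying the manifest with oc apply -f <file_name> corresponds to the PATCH and PUT operations on the /apis/operators.coreos.com/v1/olmconfigs/{name} endpoint listed above, and the outcome of the change is reported back through status.conditions.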
Chapter 3. Logging information for Red Hat Quay | Chapter 3. Logging information for Red Hat Quay Obtaining log information can be beneficial in various ways for managing, monitoring, and troubleshooting applications running in containers or pods. Some of the reasons why obtaining log information is valuable include the following: Debugging and Troubleshooting : Logs provide insights into what's happening inside the application, allowing developers and system administrators to identify and resolve issues. By analyzing log messages, one can identify errors, exceptions, warnings, or unexpected behavior that might occur during the application's execution. Performance Monitoring : Monitoring logs helps to track the performance of the application and its components. Monitoring metrics like response times, request rates, and resource utilization can help in optimizing and scaling the application to meet the demand. Security Analysis : Logs can be essential in auditing and detecting potential security breaches. By analyzing logs, suspicious activities, unauthorized access attempts, or any abnormal behavior can be identified, helping in detecting and responding to security threats. Tracking User Behavior : In some cases, logs can be used to track user activities and behavior. This is particularly important for applications that handle sensitive data, where tracking user actions can be useful for auditing and compliance purposes. Capacity Planning : Log data can be used to understand resource utilization patterns, which can aid in capacity planning. By analyzing logs, one can identify peak usage periods, anticipate resource needs, and optimize infrastructure accordingly. Error Analysis : When errors occur, logs can provide valuable context about what happened leading up to the error. This can help in understanding the root cause of the issue and facilitating the debugging process. Verification of Deployment : Logging during the deployment process can help verify if the application is starting correctly and if all components are functioning as expected. Continuous Integration/Continuous Deployment (CI/CD) : In CI/CD pipelines, logging is essential to capture build and deployment statuses, allowing teams to monitor the success or failure of each stage. 3.1. Obtaining log information for Red Hat Quay Log information can be obtained for all types of Red Hat Quay deployments, including geo-replication deployments, standalone deployments, and Operator deployments. Log information can also be obtained for mirrored repositories. It can help you troubleshoot authentication and authorization issues, and object storage issues. After you have obtained the necessary log information, you can search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Use the following procedure to obtain logs for your Red Hat Quay deployment. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command to view the logs: USD oc logs <quay_pod_name> If you are on a standalone Red Hat Quay deployment, enter the following command: USD podman logs <quay_container_name> Example output ... gunicorn-web stdout | 2023-01-20 15:41:52,071 [205] [DEBUG] [app] Starting request: urn:request:0d88de25-03b0-4cf9-b8bc-87f1ac099429 (/oauth2/azure/callback) {'X-Forwarded-For': '174.91.79.124'} ... 3.2.
Examining verbose logs Red Hat Quay does not have verbose logs; however, with the following procedures, you can obtain a detailed status check of your database pod or container. Note Additional debugging information can be returned if you have deployed Red Hat Quay in one of the following ways: You have deployed Red Hat Quay by passing in the DEBUGLOG=true variable. You have deployed Red Hat Quay with LDAP authentication enabled by passing in the DEBUGLOG=true and USERS_DEBUG=1 variables. You have configured Red Hat Quay on OpenShift Container Platform by updating the QuayRegistry resource to include DEBUGLOG=true . For more information, see "Running Red Hat Quay in debug mode". Procedure Enter the following commands to examine verbose database logs. If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following commands: USD oc logs <quay_pod_name> --previous USD oc logs <quay_pod_name> --previous -c <container_name> USD oc cp <quay_pod_name>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host If you are using a standalone deployment of Red Hat Quay, enter the following commands: USD podman logs <quay_container_id> --previous USD podman logs <quay_container_id> --previous -c <container_name> USD podman cp <quay_container_id>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host | [
"oc logs <quay_pod_name>",
"podman logs <quay_container_name>",
"gunicorn-web stdout | 2023-01-20 15:41:52,071 [205] [DEBUG] [app] Starting request: urn:request:0d88de25-03b0-4cf9-b8bc-87f1ac099429 (/oauth2/azure/callback) {'X-Forwarded-For': '174.91.79.124'}",
"oc logs <quay_pod_name> --previous",
"oc logs <quay_pod_name> --previous -c <container_name>",
"oc cp <quay_pod_name>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host",
"podman logs <quay_container_id> --previous",
"podman logs <quay_container_id> --previous -c <container_name>",
"podman cp <quay_container_id>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/troubleshooting_red_hat_quay/obtaining-quay-logs |
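If you want the extra debugging output described in the note above on a standalone deployment, the DEBUGLOG variable is passed to the container at start time. The following is a sketch only, not a reference invocation from this guide: the container name, ports, volume path, and the <quay_image> reference are placeholder assumptions that you would replace with the values used by your own deployment.

podman run -d --name <quay_container_name> -e DEBUGLOG=true -v /path/to/config:/conf/stack:Z -p 80:8080 -p 443:8443 <quay_image>
podman logs <quay_container_name> 2>&1 | grep -iE 'error|exception'

The second command simply filters the captured output for common failure keywords, which can make the gunicorn-web and database entries easier to scan before you attach them to a support ticket.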
Chapter 21. Control Bus | Chapter 21. Control Bus Only producer is supported The Control Bus from the EIP patterns allows for the integration system to be monitored and managed from within the framework. Use a Control Bus to manage an enterprise integration system. The Control Bus uses the same messaging mechanism used by the application data, but uses separate channels to transmit data that is relevant to the management of components involved in the message flow. In Camel you can manage and monitor using JMX, or by using a Java API from the CamelContext , or from the org.apache.camel.api.management package, or use the event notifier. The ControlBus component provides easy management of Camel applications based on the Control Bus EIP pattern. For example, by sending a message to an Endpoint you can control the lifecycle of routes, or gather performance statistics. The endpoint URI format is controlbus:command[?options] , where command can be any string to identify which type of command to use. 21.1. Commands Command Description route To control routes using the routeId and action parameter. language Allows you to specify a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. 21.2. Dependencies When using controlbus with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency> 21.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 21.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 21.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 21.4. Component Options The Control Bus component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 21.5. Endpoint Options The Control Bus endpoint is configured using URI syntax: with the following path and query parameters: 21.5.1. Path Parameters (2 parameters) Name Description Default Type command (producer) Required Command can be either route or language. Enum values: route language String language (producer) Allows you to specify the name of a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. Enum values: bean constant el exchangeProperty file groovy header jsonpath mvel ognl ref simple spel sql terser tokenize xpath xquery xtokenize Language 21.5.1.1. Query Parameters (6 parameters) Name Description Default Type action (producer) To denote an action that can be either: start, stop, or status. To either start or stop a route, or to get the status of the route as output in the message body. You can use suspend and resume from Camel 2.11.1 onwards to either suspend or resume a route. And from Camel 2.11.1 onwards you can use stats to get performance statics returned in XML format; the routeId option can be used to define which route to get the performance stats for, if routeId is not defined, then you get statistics for the entire CamelContext. The restart action will restart the route. Enum values: start stop suspend resume restart status stats String async (producer) Whether to execute the control bus task asynchronously. Important: If this option is enabled, then any result from the task is not set on the Exchange. This is only possible if executing tasks synchronously. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean loggingLevel (producer) Logging level used for logging when task is done, or if any exceptions occurred during processing the task. Enum values: TRACE DEBUG INFO WARN ERROR OFF INFO LoggingLevel restartDelay (producer) The delay in millis to use when restarting a route. 1000 int routeId (producer) To specify a route by its id. The special keyword current indicates the current route. String 21.6. 
Using route command The route command allows you to do common tasks on a given route very easily. For example, to start a route, you can send an empty message to this endpoint: template.sendBody("controlbus:route?routeId=foo&action=start", null); To get the status of the route, you can do: String status = template.requestBody("controlbus:route?routeId=foo&action=status", null, String.class); 21.7. Getting performance statistics This requires JMX to be enabled (it is by default); then you can get the performance statistics per route, or for the CamelContext. For example, to get the statistics for a route named foo, we can do: String xml = template.requestBody("controlbus:route?routeId=foo&action=stats", null, String.class); The returned statistics are in XML format. It is the same data you can get from JMX with the dumpRouteStatsAsXml operation on the ManagedRouteMBean . To get statistics for the entire CamelContext you just omit the routeId parameter as shown below: String xml = template.requestBody("controlbus:route?action=stats", null, String.class); 21.8. Using Simple language You can use the Simple language with the control bus, for example to stop a specific route, you can send a message to the "controlbus:language:simple" endpoint containing the following message: template.sendBody("controlbus:language:simple", "USD{camelContext.getRouteController().stopRoute('myRoute')}"); As this is a void operation, no result is returned. However, if you want the route status you can do: String status = template.requestBody("controlbus:language:simple", "USD{camelContext.getRouteStatus('myRoute')}", String.class); It's easier to use the route command to control the lifecycle of routes. The language command allows you to execute a language script that has stronger powers, such as Groovy, or to some extent the Simple language. For example, to shut down Camel itself you can do: template.sendBody("controlbus:language:simple?async=true", "USD{camelContext.stop()}"); We use async=true to stop Camel asynchronously, as otherwise we would be trying to stop Camel while it was still processing the message we sent to the control bus component. Note You can also use other languages such as Groovy , etc. 21.9. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.controlbus.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.controlbus.enabled Whether to enable auto configuration of the controlbus component. This is enabled by default. Boolean camel.component.controlbus.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"controlbus:command[?options]",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency>",
"controlbus:command:language",
"template.sendBody(\"controlbus:route?routeId=foo&action=start\", null);",
"String status = template.requestBody(\"controlbus:route?routeId=foo&action=status\", null, String.class);",
"String xml = template.requestBody(\"controlbus:route?routeId=foo&action=stats\", null, String.class);",
"String xml = template.requestBody(\"controlbus:route?action=stats\", null, String.class);",
"template.sendBody(\"controlbus:language:simple\", \"USD{camelContext.getRouteController().stopRoute('myRoute')}\");",
"String status = template.requestBody(\"controlbus:language:simple\", \"USD{camelContext.getRouteStatus('myRoute')}\", String.class);",
"template.sendBody(\"controlbus:language:simple?async=true\", \"USD{camelContext.stop()}\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-control-bus-component-starter |
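The ProducerTemplate examples above can also be expressed as an ordinary route, which is convenient when you want statistics gathered on a schedule rather than on demand. The following Java DSL sketch is illustrative only: the monitored route ID foo and the one-minute timer period are assumptions, and the route simply logs the XML returned by the stats action.

from("timer:collectStats?period=60000")
    .routeId("stats-collector")
    .to("controlbus:route?routeId=foo&action=stats")
    .log("${body}");

Because the control bus producer runs synchronously by default, the statistics end up in the message body and are available to the log step; if you only want to fire an action without caring about the result, you can add async=true as described in the endpoint options.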
3.4. Resource Constraints | 3.4. Resource Constraints You can determine the behavior of a resource in a cluster by configuring constraints . You can configure the following categories of constraints: location constraints - A location constraint determines which nodes a resource can run on. order constraints - An order constraint determines the order in which the resources run. colocation constraints - A colocation constraint determines where resources will be placed relative to other resources. As a shorthand for configuring a set of constraints that will locate a set of resources together and ensure that the resources start sequentially and stop in reverse order, Pacemaker supports the concept of resource groups. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-resourceconstraint-haao |
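The constraint categories above map directly onto pcs subcommands. The following sketch is illustrative only; the resource names ( VirtualIP , WebSite ), the node name, and the group name are hypothetical and assume the resources have already been created.

pcs constraint location WebSite prefers node1.example.com
pcs constraint order start VirtualIP then start WebSite
pcs constraint colocation add WebSite with VirtualIP INFINITY
pcs resource group add webgroup VirtualIP WebSite
pcs constraint show

The first three commands create one constraint of each category; the resource group command is the alternative shorthand described above for keeping a set of resources together and starting them in sequence, and pcs constraint show lists the constraints that are currently configured.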
Appendix B. Using Red Hat Maven repositories | Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat | [
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/using_red_hat_maven_repositories |
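A quick way to confirm that Maven has picked up either configuration is to inspect the effective settings and then force a fresh resolution. These commands come from standard Maven plugins rather than from this appendix, and the second one assumes your project declares at least one dependency served from the Red Hat repository.

mvn help:effective-settings
mvn -U dependency:resolve

The red-hat-ga repository, or your local file URL, should appear in the effective settings output, and the resolution log should show artifacts being downloaded from that URL on the first run.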
Chapter 19. All configuration | Chapter 19. All configuration 19.1. Cache Value cache 🛠 Defines the cache mechanism for high-availability. By default in production mode, a ispn cache is used to create a cluster between multiple server nodes. By default in development mode, a local cache disables clustering and is intended for development and testing purposes. CLI: --cache Env: KC_CACHE ispn (default), local cache-config-file 🛠 Defines the file from which cache configuration should be loaded from. The configuration file is relative to the conf/ directory. CLI: --cache-config-file Env: KC_CACHE_CONFIG_FILE cache-stack 🛠 Define the default stack to use for cluster communication and node discovery. This option only takes effect if cache is set to ispn . Default: udp. CLI: --cache-stack Env: KC_CACHE_STACK tcp , udp , kubernetes , ec2 , azure , google 19.2. Database Value db 🛠 The database vendor. CLI: --db Env: KC_DB dev-file (default), dev-mem , mariadb , mssql , mysql , oracle , postgres db-driver The fully qualified class name of the JDBC driver. If not set, a default driver is set accordingly to the chosen database. CLI: --db-driver Env: KC_DB_DRIVER db-password The password of the database user. CLI: --db-password Env: KC_DB_PASSWORD db-pool-initial-size The initial size of the connection pool. CLI: --db-pool-initial-size Env: KC_DB_POOL_INITIAL_SIZE db-pool-max-size The maximum size of the connection pool. CLI: --db-pool-max-size Env: KC_DB_POOL_MAX_SIZE 100 (default) db-pool-min-size The minimal size of the connection pool. CLI: --db-pool-min-size Env: KC_DB_POOL_MIN_SIZE db-schema The database schema to be used. CLI: --db-schema Env: KC_DB_SCHEMA db-url The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor. For instance, if using postgres , the default JDBC URL would be jdbc:postgresql://localhost/keycloak . CLI: --db-url Env: KC_DB_URL db-url-database Sets the database name of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-database Env: KC_DB_URL_DATABASE db-url-host Sets the hostname of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-host Env: KC_DB_URL_HOST db-url-port Sets the port of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-port Env: KC_DB_URL_PORT db-url-properties Sets the properties of the default JDBC URL of the chosen vendor. Make sure to set the properties accordingly to the format expected by the database vendor, as well as appending the right character at the beginning of this property value. If the db-url option is set, this option is ignored. CLI: --db-url-properties Env: KC_DB_URL_PROPERTIES db-username The username of the database user. CLI: --db-username Env: KC_DB_USERNAME 19.3. Transaction Value transaction-xa-enabled 🛠 If set to false, Keycloak uses a non-XA datasource in case the database does not support XA transactions. CLI: --transaction-xa-enabled Env: KC_TRANSACTION_XA_ENABLED true (default), false 19.4. Feature Value features 🛠 Enables a set of one or more features. 
CLI: --features Env: KC_FEATURES account-api , account2 , account3 , admin-api , admin-fine-grained-authz , admin2 , authorization , ciba , client-policies , client-secret-rotation , declarative-user-profile , docker , dynamic-scopes , fips , impersonation , js-adapter , kerberos , linkedin-oauth , map-storage , multi-site , par , preview , recovery-codes , scripts , step-up-authentication , token-exchange , update-email , web-authn features-disabled 🛠 Disables a set of one or more features. CLI: --features-disabled Env: KC_FEATURES_DISABLED account-api , account2 , account3 , admin-api , admin-fine-grained-authz , admin2 , authorization , ciba , client-policies , client-secret-rotation , declarative-user-profile , docker , dynamic-scopes , fips , impersonation , js-adapter , kerberos , linkedin-oauth , map-storage , multi-site , par , preview , recovery-codes , scripts , step-up-authentication , token-exchange , update-email , web-authn 19.5. Hostname Value hostname Hostname for the Keycloak server. CLI: --hostname Env: KC_HOSTNAME hostname-admin The hostname for accessing the administration console. Use this option if you are exposing the administration console using a hostname other than the value set to the hostname option. CLI: --hostname-admin Env: KC_HOSTNAME_ADMIN hostname-admin-url Set the base URL for accessing the administration console, including scheme, host, port and path CLI: --hostname-admin-url Env: KC_HOSTNAME_ADMIN_URL hostname-debug Toggle the hostname debug page that is accessible at /realms/master/hostname-debug CLI: --hostname-debug Env: KC_HOSTNAME_DEBUG true , false (default) hostname-path This should be set if proxy uses a different context-path for Keycloak. CLI: --hostname-path Env: KC_HOSTNAME_PATH hostname-port The port used by the proxy when exposing the hostname. Set this option if the proxy uses a port other than the default HTTP and HTTPS ports. CLI: --hostname-port Env: KC_HOSTNAME_PORT -1 (default) hostname-strict Disables dynamically resolving the hostname from request headers. Should always be set to true in production, unless proxy verifies the Host header. CLI: --hostname-strict Env: KC_HOSTNAME_STRICT true (default), false hostname-strict-backchannel By default backchannel URLs are dynamically resolved from request headers to allow internal and external applications. If all applications use the public URL this option should be enabled. CLI: --hostname-strict-backchannel Env: KC_HOSTNAME_STRICT_BACKCHANNEL true , false (default) hostname-url Set the base URL for frontend URLs, including scheme, host, port and path. CLI: --hostname-url Env: KC_HOSTNAME_URL 19.6. HTTP/TLS Value http-enabled Enables the HTTP listener. CLI: --http-enabled Env: KC_HTTP_ENABLED true , false (default) http-host The used HTTP Host. CLI: --http-host Env: KC_HTTP_HOST 0.0.0.0 (default) http-port The used HTTP port. CLI: --http-port Env: KC_HTTP_PORT 8080 (default) http-relative-path 🛠 Set the path relative to / for serving resources. The path must start with a / . CLI: --http-relative-path Env: KC_HTTP_RELATIVE_PATH / (default) https-certificate-file The file path to a server certificate or certificate chain in PEM format. CLI: --https-certificate-file Env: KC_HTTPS_CERTIFICATE_FILE https-certificate-key-file The file path to a private key in PEM format. CLI: --https-certificate-key-file Env: KC_HTTPS_CERTIFICATE_KEY_FILE https-cipher-suites The cipher suites to use. If none is given, a reasonable default is selected. 
CLI: --https-cipher-suites Env: KC_HTTPS_CIPHER_SUITES https-client-auth Configures the server to require/request client authentication. CLI: --https-client-auth Env: KC_HTTPS_CLIENT_AUTH none (default), request , required https-key-store-file The key store which holds the certificate information instead of specifying separate files. CLI: --https-key-store-file Env: KC_HTTPS_KEY_STORE_FILE https-key-store-password The password of the key store file. CLI: --https-key-store-password Env: KC_HTTPS_KEY_STORE_PASSWORD password (default) https-key-store-type The type of the key store file. If not given, the type is automatically detected based on the file name. If fips-mode is set to strict and no value is set, it defaults to BCFKS . CLI: --https-key-store-type Env: KC_HTTPS_KEY_STORE_TYPE https-port The used HTTPS port. CLI: --https-port Env: KC_HTTPS_PORT 8443 (default) https-protocols The list of protocols to explicitly enable. CLI: --https-protocols Env: KC_HTTPS_PROTOCOLS TLSv1.3,TLSv1.2 (default) https-trust-store-file The trust store which holds the certificate information of the certificates to trust. CLI: --https-trust-store-file Env: KC_HTTPS_TRUST_STORE_FILE https-trust-store-password The password of the trust store file. CLI: --https-trust-store-password Env: KC_HTTPS_TRUST_STORE_PASSWORD https-trust-store-type The type of the trust store file. If not given, the type is automatically detected based on the file name. If fips-mode is set to strict and no value is set, it defaults to BCFKS . CLI: --https-trust-store-type Env: KC_HTTPS_TRUST_STORE_TYPE 19.7. Health Value health-enabled 🛠 If the server should expose health check endpoints. If enabled, health checks are available at the /health , /health/ready and /health/live endpoints. CLI: --health-enabled Env: KC_HEALTH_ENABLED true , false (default) 19.8. Config Value config-keystore Specifies a path to the KeyStore Configuration Source. CLI: --config-keystore Env: KC_CONFIG_KEYSTORE config-keystore-password Specifies a password to the KeyStore Configuration Source. CLI: --config-keystore-password Env: KC_CONFIG_KEYSTORE_PASSWORD config-keystore-type Specifies a type of the KeyStore Configuration Source. CLI: --config-keystore-type Env: KC_CONFIG_KEYSTORE_TYPE PKCS12 (default) 19.9. Metrics Value metrics-enabled 🛠 If the server should expose metrics. If enabled, metrics are available at the /metrics endpoint. CLI: --metrics-enabled Env: KC_METRICS_ENABLED true , false (default) 19.10. Proxy Value proxy The proxy address forwarding mode if the server is behind a reverse proxy. CLI: --proxy Env: KC_PROXY none (default), edge , reencrypt , passthrough 19.11. Vault Value vault 🛠 Enables a vault provider. CLI: --vault Env: KC_VAULT file , keystore vault-dir If set, secrets can be obtained by reading the content of files within the given directory. CLI: --vault-dir Env: KC_VAULT_DIR vault-file Path to the keystore file. CLI: --vault-file Env: KC_VAULT_FILE vault-pass Password for the vault keystore. CLI: --vault-pass Env: KC_VAULT_PASS vault-type Specifies the type of the keystore file. CLI: --vault-type Env: KC_VAULT_TYPE PKCS12 (default) 19.12. Logging Value log Enable one or more log handlers in a comma-separated list. CLI: --log Env: KC_LOG console (default), file log-console-color Enable or disable colors when logging to console. CLI: --log-console-color Env: KC_LOG_CONSOLE_COLOR true , false (default) log-console-format The format of unstructured console log entries. If the format has spaces in it, escape the value using "<format>". 
CLI: --log-console-format Env: KC_LOG_CONSOLE_FORMAT %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-console-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-console-output Env: KC_LOG_CONSOLE_OUTPUT default (default), json log-file Set the log file path and filename. CLI: --log-file Env: KC_LOG_FILE data/log/keycloak.log (default) log-file-format Set a format specific to file log entries. CLI: --log-file-format Env: KC_LOG_FILE_FORMAT %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-file-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-file-output Env: KC_LOG_FILE_OUTPUT default (default), json log-level The log level of the root category or a comma-separated list of individual categories and their levels. For the root category, you don't need to specify a category. CLI: --log-level Env: KC_LOG_LEVEL info (default) 19.13. Security Value fips-mode 🛠 Sets the FIPS mode. If non-strict is set, FIPS is enabled but on non-approved mode. For full FIPS compliance, set strict to run on approved mode. This option defaults to disabled when fips feature is disabled, which is by default. This option defaults to non-strict when fips feature is enabled. CLI: --fips-mode Env: KC_FIPS_MODE non-strict , strict 19.14. Export Value dir Set the path to a directory where files will be created with the exported data. CLI: --dir Env: KC_DIR realm Set the name of the realm to export. If not set, all realms are going to be exported. CLI: --realm Env: KC_REALM users Set how users should be exported. CLI: --users Env: KC_USERS skip , realm_file , same_file , different_files (default) users-per-file Set the number of users per file. It is used only if users is set to different_files . Increasing this number leads to exponentially increasing export times. CLI: --users-per-file Env: KC_USERS_PER_FILE 50 (default) 19.15. Import Value file Set the path to a file that will be read. CLI: --file Env: KC_FILE override Set if existing data should be overwritten. If set to false, data will be ignored. CLI: --override Env: KC_OVERRIDE true (default), false | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/all-config- |
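The options in this reference can be combined on the kc.sh command line or supplied as KC_* environment variables. The following is a minimal sketch, not taken from the source; the database host, hostname, password, and certificate paths are placeholder assumptions:

# CLI form - every flag below corresponds to an entry in this reference
bin/kc.sh start \
  --db postgres \
  --db-url-host db.example.com \
  --db-username keycloak \
  --db-password <password> \
  --hostname keycloak.example.com \
  --https-certificate-file /path/to/tls.crt \
  --https-certificate-key-file /path/to/tls.key

# Equivalent environment-variable form for the database and hostname options
export KC_DB=postgres
export KC_DB_URL_HOST=db.example.com
export KC_DB_USERNAME=keycloak
export KC_DB_PASSWORD=<password>
export KC_HOSTNAME=keycloak.example.com
bin/kc.sh start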
Preface | Preface Insights image builder provides the controls and information to keep your systems secure, available, and operating efficiently. Update all your systems with secure, over-the-air updates. Organize your systems in groups that match your business and send updates that match your workflows. Use Red Hat Insights to find and fix potential vulnerabilities in your edge systems with one click. With the Insights image builder application, you can create an image and manage the packages associated with an image. You can build an image, download it, install it on a system, and then register that system so it can receive updates. Provisioning and registration involve the following high-level tasks: Build a Red Hat Enterprise Linux for Edge image using the Insights image builder application. Download the image and modify it with your organization credentials, using Podman and the fleet management ISO utility. Deploy the image to systems. View systems in the edge management and Red Hat Insights applications, in the Red Hat Hybrid Cloud Console. | null | https://docs.redhat.com/en/documentation/edge_management/1-latest/html/create_rhel_for_edge_images_and_configure_automated_management/pr01
Chapter 1. Release notes | Chapter 1. Release notes Red Hat CodeReady Workspaces is a web-based integrated development environment (IDE). CodeReady Workspaces runs in OpenShift and is well-suited for container-based development. This section documents the most important features and bug fixes in Red Hat CodeReady Workspaces. For the list of CodeReady Workspaces 2.1 release issues, see the Chapter 2, Known issues section. CodeReady Workspaces 2.1.1 with several bugfixes has been released. To upgrade, follow the instructions in the Upgrading CodeReady Workspaces chapter of the Installation Guide. TLS (https) now enabled by default; adjust old CodeReady Workspaces instance manually In CodeReady Workspaces 2.0, the tlsSupport parameter was set to false by default, allowing the use of insecure HTTP. In CodeReady Workspaces 2.1, secure HTTPS is required to use the most recent Theia IDE, and is therefore enabled by default. Existing workspaces from 2.0 need to be adjusted to use tlsSupport: true : USD oc patch -n <codereadyNamespace> checluster/codeready-workspaces \ --patch "{\"spec\":{\"server\":{\"tlsSupport\": true}}}" --type=merge Note This issue is fixed by updating to CodeReady Workspaces version 2.1.1. 1.1. About Red Hat CodeReady Workspaces Red Hat CodeReady Workspaces 2.1 provides an enterprise-level cloud developer workspace server and browser-based integrated development environment (IDE). CodeReady Workspaces includes ready-to-use developer stacks for most of the popular programming languages, frameworks, and Red Hat technologies. This minor release of Red Hat CodeReady Workspaces is based on Eclipse Che 7.9 and offers a number of enhancements and new features, including: Support for OpenShift Dedicated 4.3 Using the CodeReady Workspaces Operator and crwctl, CodeReady Workspaces can be installed on OpenShift Dedicated versions 3.11 and 4.3. Users can then benefit from all Red Hat-managed container application platform features, including: A flexible application environment. The ability to connect and extend local services. An isolated platform, which improves security and reduces downtime. See Supported platforms for deploying Red Hat CodeReady Workspaces for a table of supported platforms and installation methods. New onboarding flow from the dashboard The Getting Started with CodeReady Workspaces page lets users start workspaces by clicking a single button and without having to configure anything. Users see the Getting Started page by default when there is no workspace created. Temporary storage options for workspace creation The dashboard propose options to enable or disable the temporary storage and displays the kubernetes namespace used when creating a workspace. Quarkus implementation Quarkus support for workspaces is now available. By default, the create, read, update, and delete (CRUD) services are provided. The workspace contains ready-to-use commands to start developer mode, including debugging capabilities. The workspace also contains the Red Hat VS Code Quarkus plug-in to generate a new project or provide snippets using the Command palette. CodeReady Workspaces Air Gap with OpenShift 4.3 On OpenShift 4.3, it is now possible to follow Configuring OperatorHub for restricted networks using OpenShift Container Platform and configure the CodeReady Workspaces Operator to be used this way. 
Languages updates Provided versions: .NET 3.1 - Update from version 2.1 Java 11 - Extending the current use of Java 8 Support added Apache Camel K, a lightweight integration framework for serverless and microservice architectures. Gradle, a build automation tool, often used for JVM languages such as Java, Groovy, or Scala. New chapters in the documentation Using artifact repositories in a restricted environment Backup and Disaster Recovery Configuring system properties for CodeReady Workspaces OpenShift Connector, the VS Code extension CodeReady Workspaces 2.1 is available in the Red Hat Container Catalog . Install it on OpenShift Container Platform, starting at version 3.11, by following the instructions in the Installing CodeReady Workspaces chapter of the Installation Guide. From OpenShift 4.1, CodeReady Workspaces 2.1 is available from the OperatorHub. Based on a new Operator that uses the Operator Lifecycle Manager, the installation flow is simpler and can be handled without leaving the OpenShift Console. For OpenShift 4.3, get CodeReady Workspaces from the OperatorHub and follow the Installing CodeReady Workspaces on OpenShift 4 from OperatorHub chapter of the Installation Guide. 1.2. Notable enhancements 1.2.1. Support of self-signed certificates and SSH key pairs, Git flow improvements git clone command is now supported for repositories with self-signed SSL certificates, allowing customers behind a firewall (air gap) to connect to company internal Git repositories and clone from them. SSH key upload in CodeReady Workspaces using the command palette is now allowed. Users are now able to configure Git credentials while doing a commit without those parameters already set. The user's SSH keys are mounted automatically on workspace start into a single Kubernetes secret. Users can list their SSH keys using the SSH: view public key command and check if the public part is uploaded to the Git server by performing a private remote Git operation, such as git clone or git push . 1.2.2. Devfile Monaco is now the main devfile editor, replacing Codemirror. This provides YAML highlighting and validation, which makes it easier to customize devfiles manually. The Che devfile editor now supports auto-completion, validation, and hover. Added the ability to create a workspace from the dashboard with a devfile. A preview URL is now offered when a task is started, and support for such URLs is added to devfiles. Environment variables for plug-ins, editors, and other OpenShift or Kubernetes components are supported. This allows plug-in developers to set default values that are usable in most cases and can be overwritten later in a devfile if needed. Adds the ability for CodeReady Workspaces factories to override devfile properties. Adds the ability for the devfile to be set to the current project opened in CodeReady Workspaces by default, if not specified in the project section of the devfile. Also, launching a workspace using the factory stored within a Git repository or a Git branch leads to project creation. The project name then matches the name of the repository or the branch. Adds the ability for the CodeReady Workspaces factory loader to read a devfile from a GitHub repository URL that ends in .git . 1.2.3. Other enhancements 1.2.3.1. Termination of a running task from the IDE Adds the ability to stop a running task by sending Ctrl + c to the corresponding terminal or by using the Command palette Terminate Task action. 1.2.3.2. 
Access to the workspace container logs is now persistent All of the workspace lifetime logs are now persistent and accessible to the end-user using the crwctl and dashboard. 1.2.3.3. The URL in the preview panel is now editable The preview panel text-field that displays the currently opened URL is now editable and allows the user to navigate to the different components of an application, such as endpoints and pages. 1.2.3.4. Monitoring and Tracing for multiple Threads pools of CodeReady Workspaces server Users can now monitor and trace multiple thread pools of CodeReady Workspaces server, the proper adjustment of which can lead to better handling of workspace start load. 1.2.3.5. Opening terminal in a specific container The command pallet offers actions related to specific containers. 1.2.3.6. Offline devfile creation Air gap mode allows offline devfile registry to include sample projects and images. 1.2.3.7. Operators use a digest for containerImage reference instead of a tag In CodeReady Workspaces, registries and Operator metadata now use specific SHA256 image digests instead of mutable image tags like :2.0 . This provides a number of benefits, including better security and support for the new approach to restricted environments in Openshift 4.3. A complete list of images that the Operator could use, including all the stack and sidecar images, is now included in the relatedImages section of the Operator metadata clusterserviceversion (CSV) YAML file. 1.2.3.8. Plug-in broker refactoring The plug-in installation process was split into separate phases to reduce the time needed to process plug-ins and to enable plug-ins to be cached between workspace launches. 1.2.3.9. Ability to add a plug-in via a URL that is not in a plug-in registry Improvements in the plug-in broker and plug-in resolution code allow adding a set of default plug-ins, without including them in the plug-in registry. Users can configure the CodeReady Workspaces server with a list of URLs to plug-in meta.yaml files. 1.2.3.10. CodeReady Workspaces command line tool (crwctl) improvements User can use the watch utility to monitor a newly created Pod. The crwctl inject command is now supported in OpenShift command line interface (CLI). 1.2.3.11. Added codeready-workspaces command syntax auto-completion to the Task editor Added support for CRW specific autocompletion and hinting when editing tasks in a CRW workspace. The editor now includes CRW-specific fields. For example, which container to use, and assisting the user while adding a task to Che-Theia. 1.2.3.12. Workspaces are created in a namespace based on the username value In CRW 2.0, workspaces were created in a dedicated namespace named with the workspace ID by default. Starting with CRW 2.1, by default, all workspaces of a given user are created in a single namespace named <username> -crw . 1.2.3.13. Internal editor improvements File tree empty-space reduction VS Code API compatibility extension More plug-ins supported 1.3. Supported platforms 1.3.1. Supported platforms and installation methods The following section provides information about the availability of CodeReady Workspaces 2.1 on OpenShift Container Platform, OpenShift Dedicated, and about their supported installation methods. Red Hat CodeReady Workspaces can be installed on OpenShift Container Platform and OpenShift Dedicated starting at version 3.11. Table 1.1. 
Availability of CodeReady Workspaces 2.1 on OpenShift Container Platform and OpenShift Dedicated 3.11 4.3 4.4 4.5 OpenShift Container Platform ✔ ✔ ✔ Technical Preview OpenShift Dedicated ✔ ✔ Technical Preview Technical Preview Table 1.2. Supported installation method for CodeReady Workspaces 2.1 on OpenShift Container Platform and OpenShift Dedicated 3.11 4.3 4.4 OpenShift Container Platform crwctl OperatorHub OperatorHub OpenShift Dedicated crwctl OperatorHub N/A It is possible to use the crwctl utility script for deploying CodeReady Workspaces 2.1 on OpenShift Container Platform versions 4.3, 4.4, and OpenShift Dedicated version 4.3. This method is considered unofficial and serves as a backup installation method for situations where the installation method using OperatorHub is not available. 1.3.2. Installing and deploying CodeReady Workspaces For OpenShift 3.11, see the Installing CodeReady Workspaces chapter of the Administrator Guide. For OpenShift 4.4, see the Installing CodeReady Workspaces from Operator Hub chapter of the Installation Guide. 1.3.3. Support policy For Red Hat CodeReady Workspaces 2.1, Red Hat will provide support for deployment, configuration, and use of the product. CodeReady Workspaces 2.1 has been tested on Chrome version 83.0.4103.97 (Official Build) (64-bit). For more information, see CodeReady Workspaces life-cycle and support policy . 1.4. Difference between Eclipse Che and Red Hat CodeReady Workspaces The main difference between CodeReady Workspaces and Eclipse Che is that CodeReady Workspaces is supported by Red Hat. There is no difference in the technologies these two products use. Nevertheless, CodeReady Workspaces runs the plug-ins and devfiles from supported images. Licensing, packaging, and support are also provided by Red Hat. The following table lists the differences between Eclipse Che and Red Hat CodeReady Workspaces: CodeReady Workspaces Eclipse Che The CodeReady Workspaces stacks are based on Red Hat Enterprise Linux. The CodeReady Workspaces stacks list includes several stack images based on Red Hat Enterprise Application Platform, such as Vert.x, Springboot, etc. The Eclipse Che stacks are based on CentOS and other free operating systems | [
"oc patch -n <codereadyNamespace> checluster/codeready-workspaces --patch \"{\\\"spec\\\":{\\\"server\\\":{\\\"tlsSupport\\\": true}}}\" --type=merge"
]
| https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/release_notes_and_known_issues/release-notes |
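A quick way to confirm that the tlsSupport patch shown above took effect is to read the value back from the CheCluster resource. This is a hedged sketch rather than a command from the source document; <codereadyNamespace> is the same placeholder used in the patch command:

# Should print "true" after the patch has been applied
oc get checluster/codeready-workspaces -n <codereadyNamespace> -o jsonpath='{.spec.server.tlsSupport}'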
Release notes for Red Hat build of OpenJDK 11.0.16 | Release notes for Red Hat build of OpenJDK 11.0.16 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.16/index |
Chapter 3. The LLDB debugger | Chapter 3. The LLDB debugger The LLDB debugger is a command-line tool for debugging C and C++ programs. Use LLDB to inspect memory within the code being debugged, control the execution state of the code, and detect the execution of particular sections of code. LLVM Toolset is distributed with LLDB 17.0.6. 3.1. Prerequisites LLVM Toolset is installed. For more information, see Installing LLVM Toolset . Your compiler is configured to create debug information. For instructions on configuring the Clang compiler, see Controlling Debug Information in the Clang Compiler User's Manual. For instructions on configuring the GCC compiler, see Preparing a Program for Debugging in the Red Hat Developer Toolset User Guide. 3.2. Starting a debugging session Use LLDB to start an interactive debugging session. Procedure To run LLDB on a program you want to debug, use the following command: On Red Hat Enterprise Linux 8: Replace < binary_file > with the name of your compiled program. You have started your LLDB debugging session in interactive mode. Your command-line terminal now displays the default prompt (lldb) . On Red Hat Enterprise Linux 9: Replace < binary_file > with the name of your compiled program. You have started your LLDB debugging session in interactive mode. Your command-line terminal now displays the default prompt (lldb) . To quit the debugging session and return to the shell prompt, run the following command: 3.3. Executing your program during a debugging session Use LLDB to execute your program during your debugging session. The execution of your program stops when the first breakpoint is reached, when an error occurs, or when the program terminates. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To execute the program you are debugging, run: To execute the program you are debugging using a specific argument, run: Replace < argument > with the command-line argument you want to use. 3.4. Using breakpoints Use breakpoints to pause the execution of your program at a set point in your source code. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To set a new breakpoint on a specific line, run the following command: Replace < source_file_name > with the name of your source file and < line_number > with the line number you want to set your breakpoint at. To set a breakpoint on a specific function, run the following command: Replace < function_name > with the name of the function you want to set your breakpoint at. To display a list of currently set breakpoints, run the following command: To delete a breakpoint, run: Replace < source_file_name > with the name of your source file and < line_number > with line number of the breakpoint you want to delete. To resume the execution of your program after it reached a breakpoint, run: To skip a specific number of breakpoints, run the following command: Replace < breakpoints_to_skip > with the number of breakpoints you want to skip. Note To skip a loop, set the < breakpoints_to_skip > to match the loop iteration count. 3.5. Stepping through code You can use LLDB to step through the code of your program to execute only one line of code after the line pointer. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . 
Procedure To step through one line of code: Set your line pointer to the line you want to execute. Run the following command: To step through a specific number of lines of code: Set your line pointer to the line you want to execute. Run the following command: Replace < number > with the number of lines you want to execute. 3.6. Listing source code Before you execute the program you are debugging, the LLDB debugger automatically displays the first 10 lines of source code. Each time the execution of the program is stopped, LLDB displays the line of source code on which it stopped as well as its surrounding lines. You can use LLDB to manually trigger the display of source code during your debugging session. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To list the first 10 lines of the source code of the program you are debugging, run: To display the source code from a specific line, run: Replace < source_file_name > with the name of your source file and < line_number > with the number of the line you want to display. 3.7. Displaying current program data The LLDB debugger provides data on variables of any complexity, any valid expressions, and function call return values. You can use LLDB to display data relevant to the program state. Prerequisites You have started an interactive debugging session. For more information, see Starting a debugging session with LLDB . Procedure To display the current value of a certain variable, expression, or return value, run: Replace < data_name > with data you want to display. 3.8. Additional resources For more information on the LLDB debugger, see the official LLDB documentation LLDB Tutorial . For a list of GDB commands and their LLDB equivalents, see the GDB to LLDB Command Map . | [
"lldb < binary_file_name >",
"lldb < binary_file >",
"(lldb) quit",
"(lldb) run",
"(lldb) run < argument >",
"(lldb) breakpoint set --file < source_file_name> --line < line_number >",
"(lldb) breakpoint set --name < function_name >",
"(lldb) breakpoint list",
"(lldb) breakpoint clear -f < source_file_name > -l < line_number >",
"(lldb) continue",
"(lldb) continue -i < breakpoints_to_skip >",
"(lldb) step",
"(lldb) step -c < number >",
"(lldb) list",
"(lldb) list < source_file_name >:< line_number >",
"(lldb) print < data_name >"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_17.0.6_toolset/assembly_the-lldb-debugger_using-llvm-toolset |
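Taken together, the commands in this chapter form a short debugging session. The sketch below is illustrative only and assumes a source file named sample.cpp built with debug information; the file name, line number, and variable name are placeholders:

# Build with debug information so LLDB can map addresses back to source lines
clang++ -g -o sample sample.cpp

# Start the session, set a breakpoint, run, inspect a variable, and exit
lldb sample
(lldb) breakpoint set --file sample.cpp --line 12
(lldb) run
(lldb) print counter
(lldb) continue
(lldb) quit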
Chapter 4. API object reference | Chapter 4. API object reference 4.1. Common object reference 4.1.1. io.k8s.api.admissionregistration.v1.MutatingWebhookConfigurationList schema Description MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MutatingWebhookConfiguration) List of MutatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.2. io.k8s.api.admissionregistration.v1.ValidatingWebhookConfigurationList schema Description ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingWebhookConfiguration) List of ValidatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.3. io.k8s.api.apps.v1.ControllerRevisionList schema Description ControllerRevisionList is a resource containing a list of ControllerRevision objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerRevision) Items is the list of ControllerRevisions kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.4. io.k8s.api.apps.v1.DaemonSetList schema Description DaemonSetList is a collection of daemon sets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DaemonSet) A list of daemon sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.5. io.k8s.api.apps.v1.DeploymentList schema Description DeploymentList is a list of Deployments. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Deployment) Items is the list of Deployments. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 4.1.6. io.k8s.api.apps.v1.ReplicaSetList schema Description ReplicaSetList is a collection of ReplicaSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicaSet) List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.7. io.k8s.api.apps.v1.StatefulSetList schema Description StatefulSetList is a collection of StatefulSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StatefulSet) Items is the list of stateful sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list's metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.8. io.k8s.api.autoscaling.v2.HorizontalPodAutoscalerList schema Description HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HorizontalPodAutoscaler) items is the list of horizontal pod autoscaler objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. 4.1.9. io.k8s.api.batch.v1.CronJobList schema Description CronJobList is a collection of cron jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CronJob) items is the list of CronJobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.10. io.k8s.api.batch.v1.JobList schema Description JobList is a collection of jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Job) items is the list of Jobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.11. io.k8s.api.certificates.v1.CertificateSigningRequestList schema Description CertificateSigningRequestList is a collection of CertificateSigningRequest objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CertificateSigningRequest) items is a collection of CertificateSigningRequest objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 4.1.12. io.k8s.api.coordination.v1.LeaseList schema Description LeaseList is a list of Lease objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Lease) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.13. io.k8s.api.core.v1.Affinity schema Description Affinity is a group of affinity scheduling rules. Type object Schema Property Type Description nodeAffinity NodeAffinity Describes node affinity scheduling rules for the pod. podAffinity PodAffinity Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity PodAntiAffinity Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 4.1.14. io.k8s.api.core.v1.AWSElasticBlockStoreVolumeSource schema Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Schema Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 4.1.15. 
io.k8s.api.core.v1.AzureDiskVolumeSource schema Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Schema Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 4.1.16. io.k8s.api.core.v1.AzureFilePersistentVolumeSource schema Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Schema Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key secretNamespace string secretNamespace is the namespace of the secret that contains Azure Storage Account Name and Key default is the same as the Pod shareName string shareName is the azure Share Name 4.1.17. io.k8s.api.core.v1.AzureFileVolumeSource schema Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Schema Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 4.1.18. io.k8s.api.core.v1.Capabilities schema Description Adds and removes POSIX capabilities from running containers. Type object Schema Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 4.1.19. io.k8s.api.core.v1.CephFSPersistentVolumeSource schema Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Schema Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef SecretReference secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 4.1.20. io.k8s.api.core.v1.CephFSVolumeSource schema Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Schema Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef LocalObjectReference secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 4.1.21. io.k8s.api.core.v1.CinderPersistentVolumeSource schema Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Schema Property Type Description fsType string fsType Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef SecretReference secretRef is Optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 4.1.22. io.k8s.api.core.v1.CinderVolumeSource schema Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". 
Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef LocalObjectReference secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 4.1.23. io.k8s.api.core.v1.ClaimSource schema Description ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. Type object Schema Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 4.1.24. io.k8s.api.core.v1.ComponentStatusList schema Description Status of all the conditions for the component as a list of ComponentStatus objects. Deprecated: This API is deprecated in v1.19+ Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ComponentStatus) List of ComponentStatus objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.25. io.k8s.api.core.v1.ConfigMapEnvSource schema Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 4.1.26. io.k8s.api.core.v1.ConfigMapKeySelector schema Description Selects a key from a ConfigMap. 
Type object Required key Schema Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 4.1.27. io.k8s.api.core.v1.ConfigMapList schema Description ConfigMapList is a resource containing a list of ConfigMap objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConfigMap) Items is the list of ConfigMaps. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.28. io.k8s.api.core.v1.ConfigMapProjection schema Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Schema Property Type Description items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 4.1.29. io.k8s.api.core.v1.ConfigMapVolumeSource schema Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 4.1.30. io.k8s.api.core.v1.Container schema Description A single application container that you want to run within a pod. Type object Required name Schema Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array (EnvVar) List of environment variables to set in the container. Cannot be updated. envFrom array (EnvFromSource) List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. 
Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array (ContainerPort) List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array (ContainerResizePolicy) Resources resize policy for the container. resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. 
Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array (VolumeDevice) volumeDevices is the list of block devices to be used by the container. volumeMounts array (VolumeMount) Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 4.1.31. io.k8s.api.core.v1.ContainerPort schema Description ContainerPort represents a network port in a single container. Type object Required containerPort Schema Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 4.1.32. io.k8s.api.core.v1.ContainerResizePolicy schema Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Schema Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 4.1.33. io.k8s.api.core.v1.CSIPersistentVolumeSource schema Description Represents storage that is managed by an external CSI volume driver (Beta feature) Type object Required driver volumeHandle Schema Property Type Description controllerExpandSecretRef SecretReference controllerExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerExpandVolume call. This field is optional, and may be empty if no secret is required. 
If the secret object contains more than one secret, all secrets are passed. controllerPublishSecretRef SecretReference controllerPublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed. driver string driver is the name of the driver to use for this volume. Required. fsType string fsType to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". nodeExpandSecretRef SecretReference nodeExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeExpandVolume call. This is a beta field which is enabled by default by the CSINodeExpandSecret feature gate. This field is optional, may be omitted if no secret is required. If the secret object contains more than one secret, all secrets are passed. nodePublishSecretRef SecretReference nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed. nodeStageSecretRef SecretReference nodeStageSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume and NodeUnstageVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed. readOnly boolean readOnly value to pass to ControllerPublishVolumeRequest. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes of the volume to publish. volumeHandle string volumeHandle is the unique volume name returned by the CSI volume plugin's CreateVolume to refer to the volume on all subsequent calls. Required. 4.1.34. io.k8s.api.core.v1.CSIVolumeSource schema Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Schema Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 4.1.35. io.k8s.api.core.v1.DownwardAPIProjection schema Description Represents downward API info for projecting into a projected volume. 
Note that this is identical to a downwardAPI volume source without the default mode. Type object Schema Property Type Description items array (DownwardAPIVolumeFile) Items is a list of DownwardAPIVolumeFile objects. 4.1.36. io.k8s.api.core.v1.DownwardAPIVolumeFile schema Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Schema Property Type Description fieldRef ObjectFieldSelector Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 4.1.37. io.k8s.api.core.v1.DownwardAPIVolumeSource schema Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (DownwardAPIVolumeFile) Items is a list of downward API volume files. 4.1.38. io.k8s.api.core.v1.EmptyDirVolumeSource schema Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 4.1.39. io.k8s.api.core.v1.EndpointsList schema Description EndpointsList is a list of endpoints. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Endpoints) List of endpoints. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.40. io.k8s.api.core.v1.EnvFromSource schema Description EnvFromSource represents the source of a set of ConfigMaps Type object Schema Property Type Description configMapRef ConfigMapEnvSource The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef SecretEnvSource The Secret to select from 4.1.41. io.k8s.api.core.v1.EnvVar schema Description EnvVar represents an environment variable present in a Container. Type object Required name Schema Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty. 4.1.42. io.k8s.api.core.v1.EnvVarSource schema Description EnvVarSource represents a source for the value of an EnvVar. Type object Schema Property Type Description configMapKeyRef ConfigMapKeySelector Selects a key of a ConfigMap. fieldRef ObjectFieldSelector Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef SecretKeySelector Selects a key of a secret in the pod's namespace 4.1.43. io.k8s.api.core.v1.EphemeralContainer schema Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Schema Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. 
Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array (EnvVar) List of environment variables to set in the container. Cannot be updated. envFrom array (EnvFromSource) List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle Lifecycle Lifecycle is not allowed for ephemeral containers. livenessProbe Probe Probes are not allowed for ephemeral containers. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array (ContainerPort) Ports are not allowed for ephemeral containers. readinessProbe Probe Probes are not allowed for ephemeral containers. resizePolicy array (ContainerResizePolicy) Resources resize policy for the container. resources ResourceRequirements Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. securityContext SecurityContext Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. startupProbe Probe Probes are not allowed for ephemeral containers. 
stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array (VolumeDevice) volumeDevices is the list of block devices to be used by the container. volumeMounts array (VolumeMount) Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 4.1.44. io.k8s.api.core.v1.EphemeralVolumeSource schema Description Represents an ephemeral volume that is handled by a normal storage driver. Type object Schema Property Type Description volumeClaimTemplate PersistentVolumeClaimTemplate Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. 
the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 4.1.45. io.k8s.api.core.v1.EventList schema Description EventList is a list of events. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) List of events kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.46. io.k8s.api.core.v1.EventSource schema Description EventSource contains information for an event. Type object Schema Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated. 4.1.47. io.k8s.api.core.v1.ExecAction schema Description ExecAction describes a "run in container" action. Type object Schema Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 4.1.48. io.k8s.api.core.v1.FCVolumeSource schema Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 
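As an illustration of the FCVolumeSource fields described above, a pod that mounts a Fibre Channel volume might be declared as follows; the image name, target WWN, and LUN are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: fc-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: fc-volume
      mountPath: /data
  volumes:
  - name: fc-volume
    fc:
      targetWWNs: ["50060e801049cfd1"]   # placeholder target WWN
      lun: 0
      fsType: ext4
      readOnly: true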
4.1.49. io.k8s.api.core.v1.FlexPersistentVolumeSource schema Description FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Schema Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef SecretReference secretRef is Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 4.1.50. io.k8s.api.core.v1.FlexVolumeSource schema Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Schema Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 4.1.51. io.k8s.api.core.v1.FlockerVolumeSource schema Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Schema Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 4.1.52. io.k8s.api.core.v1.GCEPersistentDiskVolumeSource schema Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Schema Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. 
If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 4.1.53. io.k8s.api.core.v1.GitRepoVolumeSource schema Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Schema Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 4.1.54. io.k8s.api.core.v1.GlusterfsPersistentVolumeSource schema Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Schema Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod endpointsNamespace string endpointsNamespace is the namespace that contains Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 4.1.55. io.k8s.api.core.v1.GlusterfsVolumeSource schema Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Schema Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 4.1.56. io.k8s.api.core.v1.GRPCAction schema Description Type object Required port Schema Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. 
service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 4.1.57. io.k8s.api.core.v1.HostAlias schema Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Schema Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 4.1.58. io.k8s.api.core.v1.HostPathVolumeSource schema Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Schema Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 4.1.59. io.k8s.api.core.v1.HTTPGetAction schema Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Schema Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array (HTTPHeader) Custom headers to set in the request. HTTP allows repeated headers. path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 4.1.60. io.k8s.api.core.v1.HTTPHeader schema Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Schema Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 4.1.61. io.k8s.api.core.v1.ISCSIPersistentVolumeSource schema Description ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. 
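A rough sketch of a PersistentVolume that uses this source precedes the field reference that follows; the portal address, IQN, and capacity shown are placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  iscsi:
    targetPortal: 192.0.2.10:3260                # placeholder portal (documentation address range)
    iqn: iqn.2001-04.com.example:storage.disk1   # placeholder IQN
    lun: 0
    fsType: ext4
    readOnly: false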
Type object Required targetPortal iqn lun Schema Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is Target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun is iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef SecretReference secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 4.1.62. io.k8s.api.core.v1.ISCSIVolumeSource schema Description Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Schema Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef LocalObjectReference secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 4.1.63. io.k8s.api.core.v1.KeyToPath schema Description Maps a string key to a path within a volume. 
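As a minimal sketch of KeyToPath in use, the items field of a configMap volume maps individual keys to file paths inside the volume; the ConfigMap name, key, and path below are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: keytopath-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/app
  volumes:
  - name: config
    configMap:
      name: app-config            # placeholder ConfigMap name
      items:
      - key: app.properties       # placeholder key to project
        path: conf/app.properties # relative path of the projected file
        mode: 0444                # optional per-file mode bits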
Type object Required key path Schema Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 4.1.64. io.k8s.api.core.v1.Lifecycle schema Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Schema Property Type Description postStart LifecycleHandler PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop LifecycleHandler PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 4.1.65. io.k8s.api.core.v1.LifecycleHandler schema Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Schema Property Type Description exec ExecAction Exec specifies the action to take. httpGet HTTPGetAction HTTPGet specifies the http request to perform. tcpSocket TCPSocketAction Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 4.1.66. io.k8s.api.core.v1.LimitRangeList schema Description LimitRangeList is a list of LimitRange items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (LimitRange) Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.67. io.k8s.api.core.v1.LocalObjectReference schema Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 4.1.68. io.k8s.api.core.v1.LocalVolumeSource schema Description Local represents directly-attached storage with node affinity (Beta feature) Type object Required path Schema Property Type Description fsType string fsType is the filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default value is to auto-select a filesystem if unspecified. path string path of the full path to the volume on the node. It can be either a directory or block device (disk, partition, ... ). 4.1.69. io.k8s.api.core.v1.NamespaceList schema Description NamespaceList is a list of Namespaces. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Namespace) Items is the list of Namespace objects in the list. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.70. io.k8s.api.core.v1.NFSVolumeSource schema Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Schema Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 4.1.71. io.k8s.api.core.v1.NodeAffinity schema Description Node affinity is a group of node affinity scheduling rules. Type object Schema Property Type Description preferredDuringSchedulingIgnoredDuringExecution array (PreferredSchedulingTerm) The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution NodeSelector If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 4.1.72. io.k8s.api.core.v1.NodeList schema Description NodeList is the whole list of all Nodes which have been registered with master. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.73. io.k8s.api.core.v1.NodeSelector schema Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Schema Property Type Description nodeSelectorTerms array (NodeSelectorTerm) Required. A list of node selector terms. The terms are ORed. 4.1.74. io.k8s.api.core.v1.NodeSelectorRequirement schema Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Schema Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 4.1.75. io.k8s.api.core.v1.NodeSelectorTerm schema Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Schema Property Type Description matchExpressions array (NodeSelectorRequirement) A list of node selector requirements by node's labels. 
matchFields array (NodeSelectorRequirement) A list of node selector requirements by node's fields. 4.1.76. io.k8s.api.core.v1.ObjectFieldSelector schema Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Schema Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 4.1.77. io.k8s.api.core.v1.ObjectReference schema Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Schema Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 4.1.78. io.k8s.api.core.v1.PersistentVolumeClaim schema Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. ..spec Description:: + PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. ..spec.dataSource Description:: + TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced ..spec.dataSourceRef Description:: + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. 
This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. ..spec.resources Description:: + ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ..spec.resources.claims Description:: + Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array ..spec.resources.claims[] Description:: + ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. 
It makes that resource available inside a container. ..status Description:: + PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources object (Quantity) allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound resizeStatus string resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. Possible enum values: - "" When expansion is complete, the empty string is set by resize controller or kubelet. - "ControllerExpansionFailed" State set when expansion has failed in resize controller with a terminal error. Transient errors such as timeout should not set this status and should leave ResizeStatus unmodified, so as resize controller can resume the volume expansion. - "ControllerExpansionInProgress" State set when resize controller starts expanding the volume in control-plane - "NodeExpansionFailed" State set when expansion has failed in kubelet with a terminal error. Transient errors don't set NodeExpansionFailed. - "NodeExpansionInProgress" State set when kubelet starts expanding the volume. - "NodeExpansionPending" State set when resize controller has finished expanding the volume but further expansion is needed on the node. ..status.conditions Description:: + conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array ..status.conditions[] Description:: + PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. 
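The remaining condition fields continue below. As a quick illustration of the claim spec fields described above, a minimal PersistentVolumeClaim might look like the following; the claim name, StorageClass name, and requested size are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim                        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: example-storage-class    # illustrative StorageClass name
  resources:
    requests:
      storage: 10Gi                          # illustrative size

The status stanza (phase, capacity, conditions) is populated by the control plane once the claim is bound; it is not set by the user.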
lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string 4.1.79. io.k8s.api.core.v1.PersistentVolumeClaimList schema Description PersistentVolumeClaimList is a list of PersistentVolumeClaim items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolumeClaim) items is a list of persistent volume claims. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.80. io.k8s.api.core.v1.PersistentVolumeClaimSpec schema Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Schema Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource TypedLocalObjectReference dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef TypedObjectReference dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. 
When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources ResourceRequirements resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 4.1.81. io.k8s.api.core.v1.PersistentVolumeClaimTemplate schema Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Schema Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec PersistentVolumeClaimSpec The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 4.1.82. io.k8s.api.core.v1.PersistentVolumeClaimVolumeSource schema Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Schema Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 4.1.83. io.k8s.api.core.v1.PersistentVolumeList schema Description PersistentVolumeList is a list of PersistentVolume items. 
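Before the schema listing, a short sketch of how a pod consumes a claim through the PersistentVolumeClaimVolumeSource described above; the pod name, image, mount path, and claim name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-claim                       # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data               # illustrative mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim               # an existing claim in the same namespace
      readOnly: false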
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolume) items is a list of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.84. io.k8s.api.core.v1.PersistentVolumeSpec schema Description PersistentVolumeSpec is the specification of a persistent volume. Type object Schema Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore AWSElasticBlockStoreVolumeSource awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFilePersistentVolumeSource azureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs CephFSPersistentVolumeSource cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderPersistentVolumeSource cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md claimRef ObjectReference claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding csi CSIPersistentVolumeSource csi represents storage that is handled by an external CSI driver (Beta feature). fc FCVolumeSource fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexPersistentVolumeSource flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk glusterfs GlusterfsPersistentVolumeSource glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIPersistentVolumeSource iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. local LocalVolumeSource local represents directly-attached storage with node affinity mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs NFSVolumeSource nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs nodeAffinity VolumeNodeAffinity nodeAffinity defines constraints that limit what nodes this volume can be accessed from. This field influences the scheduling of pods that use this volume. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk PhotonPersistentDiskVolumeSource photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource portworxVolume represents a portworx volume attached and mounted on kubelets host machine quobyte QuobyteVolumeSource quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDPersistentVolumeSource rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOPersistentVolumeSource scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. 
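The remaining PersistentVolumeSpec fields continue below. As an illustration of the fields above, a manually provisioned NFS-backed volume might be declared as follows; the volume name, capacity, class name, server, and export path are illustrative, and NFS is chosen only as an example of the single volume source a PersistentVolume carries:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                           # illustrative name
spec:
  capacity:
    storage: 50Gi                            # illustrative capacity
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: example-storage-class    # illustrative class name
  volumeMode: Filesystem
  mountOptions:
  - nfsvers=4.1                              # illustrative mount option
  nfs:                                       # exactly one volume source is set; NFS chosen for illustration
    server: nfs.example.com                  # illustrative server
    path: /exports/data                      # illustrative export path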
storageos StorageOSPersistentVolumeSource storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod More info: https://examples.k8s.io/volumes/storageos/README.md volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume VsphereVirtualDiskVolumeSource vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 4.1.85. io.k8s.api.core.v1.PhotonPersistentDiskVolumeSource schema Description Represents a Photon Controller persistent disk resource. Type object Required pdID Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 4.1.86. io.k8s.api.core.v1.PodAffinity schema Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Schema Property Type Description preferredDuringSchedulingIgnoredDuringExecution array (WeightedPodAffinityTerm) The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution array (PodAffinityTerm) If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 4.1.87. io.k8s.api.core.v1.PodAffinityTerm schema Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Schema Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. 
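The namespaces and topologyKey fields follow below, and PodAntiAffinity, documented next, mirrors this structure for keeping pods apart. A minimal sketch of a required co-location rule, where the pod names, labels, and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: web                                  # illustrative name
  labels:
    app: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache                       # illustrative label of the pods to co-locate with
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: registry.example.com/web:latest   # illustrative image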
namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 4.1.88. io.k8s.api.core.v1.PodAntiAffinity schema Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Schema Property Type Description preferredDuringSchedulingIgnoredDuringExecution array (WeightedPodAffinityTerm) The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution array (PodAffinityTerm) If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 4.1.89. io.k8s.api.core.v1.PodDNSConfig schema Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Type object Schema Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array (PodDNSConfigOption) A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 4.1.90. io.k8s.api.core.v1.PodDNSConfigOption schema Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Schema Property Type Description name string Required. value string 4.1.91. io.k8s.api.core.v1.PodList schema Description PodList is a list of Pods. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Pod) List of pods. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.92. io.k8s.api.core.v1.PodOS schema Description PodOS defines the OS parameters of a pod. Type object Required name Schema Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 4.1.93. io.k8s.api.core.v1.PodReadinessGate schema Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Schema Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 4.1.94. io.k8s.api.core.v1.PodResourceClaim schema Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Schema Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source ClaimSource Source describes where to find the ResourceClaim. 4.1.95. io.k8s.api.core.v1.PodSchedulingGate schema Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Schema Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 4.1.96. io.k8s.api.core.v1.PodSecurityContext schema Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Schema Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. 
Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Always" indicates that volume's ownership and permissions should always be changed whenever volume is mounted inside a Pod. This is the default behavior. - "OnRootMismatch" indicates that volume's ownership and permissions will be changed only when permission and ownership of root directory does not match with expected permissions on the volume. This can help shorten the time it takes to change ownership and permissions of a volume. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array (Sysctl) Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
4.1.97. io.k8s.api.core.v1.PodSpec schema Description PodSpec is a description of a pod.
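Before the field listing, a minimal sketch of how the pod-level security settings from PodSecurityContext above fit into a PodSpec; the pod name, UID, GID, fsGroup, and image values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-app                       # illustrative name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                          # illustrative UID
    runAsGroup: 3000                         # illustrative GID
    fsGroup: 2000                            # illustrative supplemental group applied to volumes
    fsGroupChangePolicy: OnRootMismatch
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}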
Type object Required containers Schema Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity Affinity If specified, the pod's scheduling constraints automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array (Container) List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. dnsConfig PodDNSConfig Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array (EphemeralContainer) List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. hostAliases array (HostAlias) HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. 
This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod. If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array (LocalObjectReference) ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod initContainers array (Container) List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os PodOS Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set.
If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array (PodReadinessGate) If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates resourceClaims array (PodResourceClaim) ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. schedulingGates array (PodSchedulingGate) SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. 
See type description for default values of each field. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true, the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array (Toleration) If specified, the pod's tolerations. topologySpreadConstraints array (TopologySpreadConstraint) TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. volumes array (Volume) List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes
4.1.98. io.k8s.api.core.v1.PodTemplateList schema Description PodTemplateList is a list of PodTemplates. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodTemplate) List of pod templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.99.
io.k8s.api.core.v1.PodTemplateSpec schema Description PodTemplateSpec describes the data a pod should have when created from a template Type object Schema Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PodSpec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 4.1.100. io.k8s.api.core.v1.PortworxVolumeSource schema Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Schema Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 4.1.101. io.k8s.api.core.v1.PreferredSchedulingTerm schema Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Schema Property Type Description preference NodeSelectorTerm A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 4.1.102. io.k8s.api.core.v1.Probe schema Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Schema Property Type Description exec ExecAction Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc GRPCAction GRPC specifies an action involving a GRPC port. httpGet HTTPGetAction HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket TCPSocketAction TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. 
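The timeoutSeconds field follows below. For orientation, the following sketch shows readiness and liveness probes on a single container, together with the ResourceRequirements schema documented later in this section; the pod name, image, port, endpoint path, and resource values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: probed-app                           # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    ports:
    - containerPort: 8080                    # illustrative port
    readinessProbe:
      httpGet:
        path: /healthz                       # illustrative endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 1
      failureThreshold: 3
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
    resources:
      requests:
        cpu: 100m                            # illustrative request
        memory: 128Mi
      limits:
        memory: 256Mi                        # illustrative limit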
timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
4.1.103. io.k8s.api.core.v1.ProjectedVolumeSource schema Description Represents a projected volume source. Type object Schema Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array (VolumeProjection) sources is the list of volume projections
4.1.104. io.k8s.api.core.v1.QuobyteVolumeSource schema Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Schema Property Type Description group string group to map volume access to. Default is no group. readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin. user string user to map volume access to. Defaults to serviceaccount user. volume string volume is a string that references an already created Quobyte volume by name.
4.1.105. io.k8s.api.core.v1.RBDPersistentVolumeSource schema Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Schema Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef SecretReference secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 4.1.106.
io.k8s.api.core.v1.RBDVolumeSource schema Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Schema Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef LocalObjectReference secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 4.1.107. io.k8s.api.core.v1.ReplicationControllerList schema Description ReplicationControllerList is a collection of replication controllers. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicationController) List of replication controllers. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.108. io.k8s.api.core.v1.ResourceClaim schema Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Schema Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 4.1.109. io.k8s.api.core.v1.ResourceFieldSelector schema Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Schema Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 4.1.110. 
io.k8s.api.core.v1.ResourceQuotaList schema Description ResourceQuotaList is a list of ResourceQuota items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ResourceQuota) Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.111. io.k8s.api.core.v1.ResourceRequirements schema Description ResourceRequirements describes the compute resource requirements. Type object Schema Property Type Description claims array (ResourceClaim) Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 4.1.112. io.k8s.api.core.v1.ScaleIOPersistentVolumeSource schema Description ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs" gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef SecretReference secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled is the flag to enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 4.1.113. 
io.k8s.api.core.v1.ScaleIOVolumeSource schema Description ScaleIOVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 4.1.114. io.k8s.api.core.v1.SeccompProfile schema Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Schema Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 4.1.115. io.k8s.api.core.v1.SecretEnvSource schema Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 4.1.116. io.k8s.api.core.v1.SecretKeySelector schema Description SecretKeySelector selects a key of a Secret. Type object Required key Schema Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 4.1.117. io.k8s.api.core.v1.SecretList schema Description SecretList is a list of Secret. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.118. io.k8s.api.core.v1.SecretProjection schema Description Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Type object Schema Property Type Description items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 4.1.119. io.k8s.api.core.v1.SecretReference schema Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Schema Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 4.1.120. io.k8s.api.core.v1.SecretVolumeSource schema Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. 
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 4.1.121. io.k8s.api.core.v1.SecurityContext schema Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Schema Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities Capabilities The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc of the container stays intact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext.
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 4.1.122. io.k8s.api.core.v1.SELinuxOptions schema Description SELinuxOptions are the labels to be applied to the container Type object Schema Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 4.1.123. io.k8s.api.core.v1.ServiceAccountList schema Description ServiceAccountList is a list of ServiceAccount objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceAccount) List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.124. io.k8s.api.core.v1.ServiceAccountTokenProjection schema Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Schema Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. 
As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 4.1.125. io.k8s.api.core.v1.ServiceList schema Description ServiceList holds a list of services. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Service) List of services kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.126. io.k8s.api.core.v1.StorageOSPersistentVolumeSource schema Description Represents a StorageOS persistent volume resource. Type object Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef ObjectReference secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 4.1.127. io.k8s.api.core.v1.StorageOSVolumeSource schema Description Represents a StorageOS persistent volume resource. Type object Schema Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used.
This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 4.1.128. io.k8s.api.core.v1.Sysctl schema Description Sysctl defines a kernel parameter to be set Type object Required name value Schema Property Type Description name string Name of a property to set value string Value of a property to set 4.1.129. io.k8s.api.core.v1.TCPSocketAction schema Description TCPSocketAction describes an action based on opening a socket Type object Required port Schema Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 4.1.130. io.k8s.api.core.v1.Toleration schema Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Schema Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 4.1.131. io.k8s.api.core.v1.TopologySelectorLabelRequirement schema Description A topology selector requirement is a selector that matches given label. This is an alpha feature and may change in the future. Type object Required key values Schema Property Type Description key string The label key that the selector applies to. values array (string) An array of string values. One value must match the label to be selected. Each entry in Values is ORed. 4.1.132. io.k8s.api.core.v1.TopologySelectorTerm schema Description A topology selector term represents the result of label queries. 
A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future. Type object Schema Property Type Description matchLabelExpressions array (TopologySelectorLabelRequirement) A list of topology selector requirements by labels. 4.1.133. io.k8s.api.core.v1.TopologySpreadConstraint schema Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Schema Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. 
In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 
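To illustrate how maxSkew, topologyKey, whenUnsatisfiable, and labelSelector combine, the following is a minimal Pod manifest sketch that spreads matching pods evenly across zones. The pod name, the app label value, and the image are hypothetical placeholders, not values taken from this reference: apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: spread-example
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: spread-example
  containers:
  - name: app
    image: registry.example.com/app:latest
In this sketch, each value of the topology.kubernetes.io/zone node label is one domain, pods with the app: spread-example label are counted in each domain, and because whenUnsatisfiable is DoNotSchedule, the scheduler refuses any placement that would make the per-zone counts differ by more than one.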
4.1.134. io.k8s.api.core.v1.TypedLocalObjectReference schema Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Schema Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 4.1.135. io.k8s.api.core.v1.TypedObjectReference schema Description Type object Required kind name Schema Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 4.1.136. io.k8s.api.core.v1.Volume schema Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Schema Property Type Description awsElasticBlockStore AWSElasticBlockStoreVolumeSource awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFileVolumeSource azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs CephFSVolumeSource cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderVolumeSource cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap ConfigMapVolumeSource configMap represents a configMap that should populate this volume csi CSIVolumeSource csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI DownwardAPIVolumeSource downwardAPI represents downward API about the pod that should populate this volume emptyDir EmptyDirVolumeSource emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral EphemeralVolumeSource ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. 
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc FCVolumeSource fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexVolumeSource flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo GitRepoVolumeSource gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs GlusterfsVolumeSource glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIVolumeSource iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim PersistentVolumeClaimVolumeSource persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk PhotonPersistentDiskVolumeSource photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected ProjectedVolumeSource projected items for all in one resources secrets, configmaps, and downward API quobyte QuobyteVolumeSource quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDVolumeSource rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOVolumeSource scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret SecretVolumeSource secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos StorageOSVolumeSource storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume VsphereVirtualDiskVolumeSource vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 4.1.137. io.k8s.api.core.v1.VolumeDevice schema Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Schema Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 4.1.138. io.k8s.api.core.v1.VolumeMount schema Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Schema Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. 
Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 4.1.139. io.k8s.api.core.v1.VolumeNodeAffinity schema Description VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. Type object Schema Property Type Description required NodeSelector required specifies hard node constraints that must be met. 4.1.140. io.k8s.api.core.v1.VolumeProjection schema Description Projection that may be projected along with other supported volume types Type object Schema Property Type Description configMap ConfigMapProjection configMap information about the configMap data to project downwardAPI DownwardAPIProjection downwardAPI information about the downwardAPI data to project secret SecretProjection secret information about the secret data to project serviceAccountToken ServiceAccountTokenProjection serviceAccountToken is information about the serviceAccountToken data to project 4.1.141. io.k8s.api.core.v1.VsphereVirtualDiskVolumeSource schema Description Represents a vSphere volume resource. Type object Required volumePath Schema Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 4.1.142. io.k8s.api.core.v1.WeightedPodAffinityTerm schema Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Schema Property Type Description podAffinityTerm PodAffinityTerm Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 4.1.143. io.k8s.api.core.v1.WindowsSecurityContextOptions schema Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Schema Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext.
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 4.1.144. io.k8s.api.discovery.v1.EndpointSliceList schema Description EndpointSliceList represents a list of endpoint slices Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EndpointSlice) items is the list of endpoint slices kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 4.1.145. io.k8s.api.events.v1.EventList schema Description EventList is a list of Event objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.146. io.k8s.api.flowcontrol.v1beta3.FlowSchemaList schema Description FlowSchemaList is a list of FlowSchema objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FlowSchema) items is a list of FlowSchemas. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.147. io.k8s.api.flowcontrol.v1beta3.PriorityLevelConfigurationList schema Description PriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityLevelConfiguration) items is a list of request-priorities. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.148. io.k8s.api.networking.v1.IngressClassList schema Description IngressClassList is a collection of IngressClasses. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressClass) items is the list of IngressClasses. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 4.1.149. io.k8s.api.networking.v1.IngressList schema Description IngressList is a collection of Ingress. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) items is the list of Ingress. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.150. io.k8s.api.networking.v1.NetworkPolicyList schema Description NetworkPolicyList is a list of NetworkPolicy objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkPolicy) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.151. 
io.k8s.api.node.v1.RuntimeClassList schema Description RuntimeClassList is a list of RuntimeClass objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RuntimeClass) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.152. io.k8s.api.policy.v1.PodDisruptionBudgetList schema Description PodDisruptionBudgetList is a collection of PodDisruptionBudgets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodDisruptionBudget) Items is a list of PodDisruptionBudgets kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.153. io.k8s.api.rbac.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 4.1.154. io.k8s.api.rbac.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 4.1.155. io.k8s.api.rbac.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 4.1.156. io.k8s.api.rbac.v1.RoleList schema Description RoleList is a collection of Roles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 4.1.157. io.k8s.api.scheduling.v1.PriorityClassList schema Description PriorityClassList is a collection of priority classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityClass) items is the list of PriorityClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.158. io.k8s.api.storage.v1.CSIDriverList schema Description CSIDriverList is a collection of CSIDriver objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIDriver) items is the list of CSIDriver kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.159. io.k8s.api.storage.v1.CSINodeList schema Description CSINodeList is a collection of CSINode objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSINode) items is the list of CSINode kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.160. io.k8s.api.storage.v1.CSIStorageCapacityList schema Description CSIStorageCapacityList is a collection of CSIStorageCapacity objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIStorageCapacity) items is the list of CSIStorageCapacity objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.161. io.k8s.api.storage.v1.StorageClassList schema Description StorageClassList is a collection of storage classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageClass) items is the list of StorageClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.162. io.k8s.api.storage.v1.VolumeAttachmentList schema Description VolumeAttachmentList is a collection of VolumeAttachment objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeAttachment) items is the list of VolumeAttachments kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.163. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionList schema Description CustomResourceDefinitionList is a list of CustomResourceDefinition objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CustomResourceDefinition) items list individual CustomResourceDefinition objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.164. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.ExternalDocumentation schema Description ExternalDocumentation allows referencing an external resource for extended documentation. Type object Schema Property Type Description description string url string 4.1.165. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSON schema Description JSON represents any valid JSON value. These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil. Type `` 4.1.166. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaProps schema Description JSONSchemaProps is a JSON-Schema following Specification Draft 4 ( http://json-schema.org/ ). Type object Schema Property Type Description USDref string USDschema string additionalItems JSONSchemaPropsOrBool additionalProperties JSONSchemaPropsOrBool allOf array (undefined) anyOf array (undefined) default JSON default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false. 
definitions object (undefined) dependencies object (undefined) description string enum array (JSON) example JSON exclusiveMaximum boolean exclusiveMinimum boolean externalDocs ExternalDocumentation format string format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: - bsonobjectid: a bson object ID, i.e. a 24 characters hex string - uri: an URI as parsed by Golang net/url.ParseRequestURI - email: an email address as parsed by Golang net/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. - ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}USD - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}USD - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}USD - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}USD - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" - isbn10: an ISBN10 number string like "0321751043" - isbn13: an ISBN13 number string like "978-0321751041" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})USD with any non digit characters mixed in - ssn: a U.S. social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}USD - hexcolor: an hexadecimal color code like " FFFFFF: following the regex ^ ?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})USD - rgbcolor: an RGB color code like rgb like "rgb(255,255,2559" - byte: base64 encoded binary data - password: any kind of string - date: a date string like "2006-01-02" as defined by full-date in RFC3339 - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. id string items JSONSchemaPropsOrArray maxItems integer maxLength integer maxProperties integer maximum number minItems integer minLength integer minProperties integer minimum number multipleOf number not JSONSchemaProps nullable boolean oneOf array (undefined) pattern string patternProperties object (undefined) properties object (undefined) required array (string) title string type string uniqueItems boolean x-kubernetes-embedded-resource boolean x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata). x-kubernetes-int-or-string boolean x-kubernetes-int-or-string specifies that this value is either an integer or a string. 
If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns: 1) anyOf: - type: integer - type: string 2) allOf: - anyOf: - type: integer - type: string - ... zero or more x-kubernetes-list-map-keys array (string) x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type map by specifying the keys used as the index of the map. This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported). The properties specified must either be required or have a default value, to ensure those properties are present for all list items. x-kubernetes-list-type string x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values: 1) atomic : the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ... ). 2) set : Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type atomic or an array with x-kubernetes-list-type atomic . 3) map : These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The map tag must only be used on a list with elements of type object. Defaults to atomic for arrays. x-kubernetes-map-type string x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values: 1) granular : These maps are actual maps (key-value pairs) and each fields are independent from each other (they can each be manipulated by separate actors). This is the default behaviour for all maps. 2) atomic : the list is treated as a single entity, like a scalar. Atomic maps will be entirely replaced when updated. x-kubernetes-preserve-unknown-fields boolean x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden. x-kubernetes-validations array (ValidationRule) x-kubernetes-validations describes a list of validation rules written in the CEL expression language. This field is an alpha-level. Using this field requires the feature gate CustomResourceValidationExpressions to be enabled. 4.1.167. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaPropsOrArray schema Description JSONSchemaPropsOrArray represents a value that can either be a JSONSchemaProps or an array of JSONSchemaProps. Mainly here for serialization purposes. Type `` 4.1.168. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaPropsOrBool schema Description JSONSchemaPropsOrBool represents JSONSchemaProps or a boolean value. Defaults to true for the boolean property. Type `` 4.1.169. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaPropsOrStringArray schema Description JSONSchemaPropsOrStringArray represents a JSONSchemaProps or a string array. Type `` 4.1.170. 
io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.ValidationRule schema Description ValidationRule describes a validation rule written in the CEL expression language. Type object Required rule Schema Property Type Description message string Message represents the message displayed when validation fails. The message is required if the Rule contains line breaks. The message must not contain line breaks. If unset, the message is "failed rule: {Rule}". e.g. "must be a URL with the host matching spec.host" messageExpression string MessageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a rule, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the rule; the only difference is the return type. Example: "x must be less than max ("string(self.max)")" rule string Rule represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec The Rule is scoped to the location of the x-kubernetes-validations extension in the schema. The self variable in the CEL expression is bound to the scoped value. Example: - Rule scoped to the root of a resource with a status subresource: {"rule": "self.status.actual ⇐ self.spec.maxDesired"} If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via self.field and field presence can be checked via has(self.field) . Null valued fields are treated as absent fields in CEL expressions. If the Rule is scoped to an object with additionalProperties (i.e. a map) the value of the map are accessible via self[mapKey] , map containment can be checked via mapKey in self and all entries of the map are accessible via CEL macros and functions such as self.all(... ) . If the Rule is scoped to an array, the elements of the array are accessible via self[i] and also by macros and functions. If the Rule is scoped to a scalar, self is bound to the scalar value. Examples: - Rule scoped to a map of objects: {"rule": "self.components['Widget'].priority < 10"} - Rule scoped to a list of integers: {"rule": "self.values.all(value, value >= 0 && value < 100)"} - Rule scoped to a string value: {"rule": "self.startsWith('kube')"} The apiVersion , kind , metadata.name and metadata.generateName are always accessible from the root of the object and from any x-kubernetes-embedded-resource annotated objects. No other metadata properties are accessible. Unknown data preserved in custom resources via x-kubernetes-preserve-unknown-fields is not accessible in CEL expressions. This includes: - Unknown field values that are preserved by object schemas with x-kubernetes-preserve-unknown-fields. - Object properties where the property schema is of an "unknown type". 
An "unknown type" is recursively defined as: - A schema with no type and x-kubernetes-preserve-unknown-fields set to true - An array where the items schema is of an "unknown type" - An object where the additionalProperties schema is of an "unknown type" Only property names of the form [a-zA-Z_.-/][a-zA-Z0-9_.-/]* are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - ' ' escapes to '__underscores__' - '.' escapes to '__dot__' - '-' escapes to '__dash__' - '/' escapes to '__slash__' - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are: "true", "false", "null", "in", "as", "break", "const", "continue", "else", "for", "function", "if", "import", "let", "loop", "package", "namespace", "return". Examples: - Rule accessing a property named "namespace": {"rule": self.__namespace__ 0 } - Rule accessing a property named "x-prop": {"rule": self.x__dash__prop 0 } - Rule accessing a property named "redact d": {"rule": self.redact__underscores__d 0 } Equality on arrays with x-kubernetes-list-type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type: - 'set': X + Y performs a union where the array positions of all elements in X are preserved and non-intersecting elements in Y are appended, retaining their partial order. - 'map': X + Y performs a merge where the array positions of all keys in X are preserved but the values are overwritten by values in Y when the key sets of X and Y intersect. Elements in Y with non-intersecting keys are appended, retaining their partial order. 4.1.171. io.k8s.apimachinery.pkg.api.resource.Quantity schema Description Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <quantity> ::= <signedNumber><suffix> <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= "+" | "-" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei <decimalSI> ::= m | "" | k | M | G | T | P | E <decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as "1500m" - 1.5Gi will be serialized as "1536Mi" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. 
Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Type string 4.1.172. io.k8s.apimachinery.pkg.apis.meta.v1.Condition schema Description Condition contains details for one aspect of the current state of this API Resource. Type object Required type status lastTransitionTime reason message Schema Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 4.1.173. io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions schema Description DeleteOptions may be provided when deleting an API object. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dryRun array (string) When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
preconditions Preconditions Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. 4.1.174. io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1 schema Description FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is position of a item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff Type object 4.1.175. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector schema Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Schema Property Type Description matchExpressions array (LabelSelectorRequirement) matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.176. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelectorRequirement schema Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Schema Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.177. io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta schema Description ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}. Type object Schema Property Type Description continue string continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the set of available objects. 
Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. remainingItemCount integer remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is estimating the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact. resourceVersion string String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. 4.1.178. io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry schema Description ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Type object Schema Property Type Description apiVersion string APIVersion defines the version of this resource that this field set applies to. The format is "group/version" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. fieldsType string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: "FieldsV1" fieldsV1 FieldsV1 FieldsV1 holds the first JSON version format as described in the "FieldsV1" type. manager string Manager is an identifier of the workflow managing these fields. operation string Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'. subresource string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time is the timestamp of when the ManagedFields entry was added. The timestamp will also be updated if a field is added, the manager changes any of the owned fields value or removes a field. The timestamp does not update when a field is removed from the entry because another manager took it over. 4.1.179. io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime schema Description MicroTime is version of Time with microsecond level precision. Type string 4.1.180. 
io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta schema Description ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. Type object Schema Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers array (string) Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. 
generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels managedFields array (ManagedFieldsEntry) ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces ownerReferences array (OwnerReference) List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. uid string UID is the unique in time and space value for this object. 
It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 4.1.181. io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference schema Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Schema Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 4.1.182. io.k8s.apimachinery.pkg.apis.meta.v1.Patch schema Description Patch is provided to give a concrete name and type to the Kubernetes PATCH request body. Type object 4.1.183. io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions schema Description Preconditions must be fulfilled before an operation (update, delete, etc.) is carried out. Type object Schema Property Type Description resourceVersion string Specifies the target ResourceVersion uid string Specifies the target UID. 4.1.184. io.k8s.apimachinery.pkg.apis.meta.v1.Status schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. 
If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 4.1.185. io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause schema Description StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered. Type object Schema Property Type Description field string The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional. Examples: "name" - the field "name" on the current resource "items[0].name" - the field "name" on the first array entry in "items" message string A human-readable description of the cause of the error. This field may be presented as-is to a reader. reason string A machine-readable description of the cause of the error. If this value is empty there is no information available. 4.1.186. io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails schema Description StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined. Type object Schema Property Type Description causes array (StatusCause) The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes. group string The group attribute of the resource associated with the status StatusReason. kind string The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described). retryAfterSeconds integer If specified, the time in seconds before the operation should be retried. Some errors may indicate the client must take an alternate action - for those errors this field may indicate how long to wait before taking the alternate action. uid string UID of the resource. (when there is a single resource which can be described). More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 4.1.187. io.k8s.apimachinery.pkg.apis.meta.v1.Time schema Description Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers. Type string 4.1.188. io.k8s.apimachinery.pkg.apis.meta.v1.WatchEvent schema Description Event represents a single event to a watched resource. Type object Required type object Schema Property Type Description object RawExtension Object is: * If Type is Added or Modified: the new state of the object. * If Type is Deleted: the state of the object immediately before deletion. * If Type is Error: *Status is recommended; other types may make sense depending on context. type string 4.1.189. 
io.k8s.apimachinery.pkg.runtime.RawExtension schema Description RawExtension is used to hold extensions in external versions. To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types. So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The step is to copy (using pkg/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.) Type object 4.1.190. io.k8s.apimachinery.pkg.util.intstr.IntOrString schema Description IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number. Type string 4.1.191. io.k8s.kube-aggregator.pkg.apis.apiregistration.v1.APIServiceList schema Description APIServiceList is a list of APIService objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIService) Items is the list of APIService kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.192. io.k8s.migration.v1alpha1.StorageVersionMigrationList schema Description StorageVersionMigrationList is a list of StorageVersionMigration Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageVersionMigration) List of storageversionmigrations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.193. 
io.k8s.storage.snapshot.v1.VolumeSnapshotClassList schema Description VolumeSnapshotClassList is a list of VolumeSnapshotClass Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotClass) List of volumesnapshotclasses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.194. io.k8s.storage.snapshot.v1.VolumeSnapshotContentList schema Description VolumeSnapshotContentList is a list of VolumeSnapshotContent Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotContent) List of volumesnapshotcontents. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.195. io.k8s.storage.snapshot.v1.VolumeSnapshotList schema Description VolumeSnapshotList is a list of VolumeSnapshot Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshot) List of volumesnapshots. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.196. 
io.openshift.internal.security.v1.RangeAllocationList schema Description RangeAllocationList is a list of RangeAllocation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RangeAllocation) List of rangeallocations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.197. io.openshift.route.v1.RouteList schema Description RouteList is a list of Route Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Route) List of routes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.198. io.openshift.security.v1.SecurityContextConstraintsList schema Description SecurityContextConstraintsList is a list of SecurityContextConstraints Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (SecurityContextConstraints) List of securitycontextconstraints. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.1.199. io.topolvm.v1.LogicalVolumeList schema Description LogicalVolumeList is a list of LogicalVolume Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (LogicalVolume) List of logicalvolumes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [
"(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)",
"(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)",
"(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/api-object-reference-1 |
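The snapshot list schemas above (VolumeSnapshotClassList, VolumeSnapshotContentList, VolumeSnapshotList) are what the API server returns when you list the corresponding objects. As a quick orientation aid, the following is a minimal, illustrative VolumeSnapshotClass manifest; it is not taken from this API reference, the name is arbitrary, and the <csi_driver_name> placeholder must be replaced with the CSI driver that actually provisions your volumes.

```yaml
# Illustrative only: the name and driver are placeholders, not values from this reference.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
driver: <csi_driver_name>   # the CSI driver that backs your persistent volumes
deletionPolicy: Delete      # or Retain, depending on how snapshot contents should be handled
```

After such a class exists, listing commands such as `oc get volumesnapshotclasses` or `oc get volumesnapshots -A` return the list types documented above, with the individual objects in the items array.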
Chapter 6. Configuring the SSSD Container to Provide Identity and Authentication Services on Atomic Host | Chapter 6. Configuring the SSSD Container to Provide Identity and Authentication Services on Atomic Host As a system administrator, you can use SSSD in a container to provide external identity, authentication, and authorization services for the Atomic Host system. This chapter describes how to run the SSSD container as privileged , which enables users from external identity sources (Identity Management or Active Directory) to leverage the services running on the Atomic host itself. Alternatively, you can run the SSSD container as unprivileged , which enables users from external identity sources (Identity Management or Active Directory) to leverage the services running in other containers on the Atomic Host. This is covered in Chapter 7, Deploying SSSD Containers With Different Configurations . Before you start, see: Section 6.1, "Prerequisites" To enroll the Atomic Host to an Identity Management server, see: Section 6.2, "Enrolling to an Identity Management Domain Using a Privileged SSSD Container" To enroll the Atomic Host to Active Directory, see: Section 6.3, "Joining an Active Directory Domain Using an SSSD Container" 6.1. Prerequisites Upgrade the Atomic Host system before installing the container. See Upgrading and Downgrading in the Red Hat Enterprise Linux Atomic Host 7 Installation and Configuration Guide . 6.2. Enrolling to an Identity Management Domain Using a Privileged SSSD Container This procedure describes how to install an SSSD container and configure it for enrollment against an Identity Management server. During the installation: Various configuration and data are copied into the container. The ipa-client-install utility for configuring an Identity Management client starts. After a successful enrollment into the Identity Management domain, the configuration and data are copied back to the Atomic Host system. Prerequisites You need one of the following: A random password for one-time client enrollment of the Atomic Host system to the Identity Management domain. To generate the password, add the Atomic Host system as an Identity Management host on the Identity Management server, for example: For details, see Installing a Client in the Linux Domain Identity, Authentication, and Policy Guide . Credentials of an Identity Management user allowed to enroll clients. By default, this is the admin user. Procedure Start the sssd container installation by using the atomic install command, and provide the random password or credentials of an IdM user that is allowed to enroll new hosts. In most cases, this is the admin user. The atomic install rhel7/sssd command accepts standard ipa-client-install options. Depending on your configuration, you might need to provide additional information using these options. For example, if ipa-client-install cannot determine the host name of your server and the domain name, use the --server and --domain options: Note You can also pass options to ipa-client-install by storing them to the /etc/sssd/ipa-client-install-options file on the Atomic Host before running atomic install . For example, the file can contain: Start SSSD in the container by using one of the following commands: Optional. Confirm that the container is running: Optional. Confirm that SSSD on the Atomic Host resolves identities from the Identity Management domain. Obtain a Kerberos ticket for an Identity Management user, and log in to the Atomic Host by using the ssh utility. 
Use the id utility to verify that you are logged in as the intended user: Use the hostname utility to verify that you are logged in to the Atomic Host system: 6.3. Joining an Active Directory Domain Using an SSSD Container This procedure describes how to install an SSSD container and configure it to join the Atomic Host system to Active Directory. Procedure Save the password of a user allowed to enroll systems to the Active Directory domain, such as the Administrator, in the /etc/sssd/realm-join-password file on the Atomic Host system: Providing the password in the file is necessary because the realm join command does not accept the password as a command-line parameter. Note If you want to specify a custom container image name later with the atomic install command to use instead of the default name ( sssd ), add the custom name to the path of the file: /etc/sssd/<custom_container_name>/realm-join-password . Start the sssd container installation by using the atomic install command, and specify the realm that you want to join. If you are using the default Administrator user account for the operation: If you are using another user account, specify it with the --user option: Start SSSD in the container by using one of the following commands: Optional. Confirm that the container is running: Optional. On the Atomic Host system, confirm that SSSD resolves identities from the Active Directory domain: Additional Resources For details on the realmd utility, see Using realmd to Connect to an Active Directory Domain in the Windows Integration Guide or the realm(8) man page. | [
"ipa host-add <atomic.example.com> --random [... output truncated ...] Random password: 4Re[>5]OBUSD3K(USDqYs:M&}B [... output truncated ...]",
"atomic install rhel7/sssd --password \" 4Re[>5]OBUSD3K(USDqYs:M&}B \" [... output truncated ...] Service sssd.service configured to run SSSD container. [... output truncated ...]",
"atomic install rhel7/sssd -p admin -w <admin_password> [... output truncated ...] Service sssd.service configured to run SSSD container. [... output truncated ...]",
"atomic install rhel7/sssd --password \" 4Re[>5]OBUSD3K(USDqYs:M&}B \" --server <server.example.com> --domain <example.com>",
"--password=4Re[>5]OBUSD3K(USDqYs:M&}B --server=server.example.com --domain=example.com",
"atomic run rhel7/sssd",
"systemctl start sssd",
"docker ps CONTAINER ID IMAGE 5859b9366f0f rhel7/sssd",
"atomic run sssd kinit <idm_user> ssh <idm_user>@<atomic.example.com>",
"id uid=1215800001(idm_user) gid=1215800001(idm_user) groups=1215800001(idm_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"hostname atomic.example.com",
"echo <password> > /etc/sssd/realm-join-password",
"atomic install rhel7/sssd realm join <ad.example.com> docker run --rm=true --privileged --net=host -v /:/host -e NAME=sssd -e IMAGE=rhel7/sssd -e HOST=/host rhel7/sssd /bin/install.sh realm join ad.example.com Initializing configuration context from host Password for Administrator: Copying new configuration to host Service sssd.service configured to run SSSD container.",
"atomic install rhel7/sssd realm join --user <user_name> <ad.example.com>",
"atomic run rhel7/sssd",
"systemctl start sssd",
"docker ps CONTAINER ID IMAGE 5859b9366f0f rhel7/sssd",
"id administrator@<ad.example.com> uid=1397800500([email protected]) gid=1397800513(domain [email protected])"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/using_containerized_identity_management_services/configuring-the-sssd-container-to-provide-identity-and-authentication-services-on-atomic-host |
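The note in section 6.3 states that a custom container image name changes where the realm-join-password file must be stored, but it does not show the resulting layout. The following sketch assumes a hypothetical custom container name of sssd-ad; the password value is a placeholder, and the chmod step is an extra precaution that the chapter itself does not require.

```
# Assumed custom container name: sssd-ad
mkdir -p /etc/sssd/sssd-ad
echo '<ad_admin_password>' > /etc/sssd/sssd-ad/realm-join-password
chmod 600 /etc/sssd/sssd-ad/realm-join-password
```

With the file in this per-container path, the realm join performed during the container installation can read the password for that specific container instance, as described in the note.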
Chapter 35. Setting up an 802.1x network authentication service for LAN clients by using hostapd with FreeRADIUS backend | Chapter 35. Setting up an 802.1x network authentication service for LAN clients by using hostapd with FreeRADIUS backend The IEEE 802.1X standard defines secure authentication and authorization methods to protect networks from unauthorized clients. By using the hostapd service and FreeRADIUS, you can provide network access control (NAC) in your network. Note Red Hat supports only FreeRADIUS with Red Hat Identity Management (IdM) as the backend source of authentication. In this documentation, the RHEL host acts as a bridge to connect different clients with an existing network. However, the RHEL host grants only authenticated clients access to the network. 35.1. Prerequisites A clean installation of the freeradius and freeradius-ldap packages. If the packages are already installed, remove the /etc/raddb/ directory, uninstall and then install the packages again. Do not reinstall the packages by using the dnf reinstall command, because the permissions and symbolic links in the /etc/raddb/ directory are then different. The host on which you want to configure FreeRADIUS is a client in an IdM domain . 35.2. Setting up the bridge on the authenticator A network bridge is a link-layer device which forwards traffic between hosts and networks based on a table of MAC addresses. If you set up RHEL as an 802.1X authenticator, add both the interfaces on which to perform authentication and the LAN interface to the bridge. Prerequisites The server has multiple Ethernet interfaces. Procedure If the bridge interface does not exist, create it: Assign the Ethernet interfaces to the bridge: Enable the bridge to forward extensible authentication protocol over LAN (EAPOL) packets: Configure the connection to automatically activate the ports: Activate the connection: Verification Display the link status of Ethernet devices that are ports of a specific bridge: Verify if forwarding of EAPOL packets is enabled on the br0 device: If the command returns 0x8 , forwarding is enabled. Additional resources nm-settings(5) man page on your system 35.3. Configuring FreeRADIUS to authenticate network clients securely by using EAP FreeRADIUS supports different methods of the Extensible authentication protocol (EAP). However, for a supported and secure scenario, use EAP-TTLS (tunneled transport layer security). With EAP-TTLS, the clients use a secure TLS connection as the outer authentication protocol to set up the tunnel. The inner authentication then uses LDAP to authenticate to Identity Management. To use EAP-TTLS, you need a TLS server certificate. Note The default FreeRADIUS configuration files serve as documentation and describe all parameters and directives. If you want to disable certain features, comment them out instead of removing the corresponding parts in the configuration files. This enables you to preserve the structure of the configuration files and the included documentation. Prerequisites You installed the freeradius and freeradius-ldap packages. The configuration files in the /etc/raddb/ directory are unchanged and as provided by the freeradius packages. The host is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. Procedure Create a private key and request a certificate from IdM: The certmonger service stores the private key in the /etc/pki/tls/private/radius.key file and the certificate in the /etc/pki/tls/certs/radius.pem file, and it sets secure permissions. 
Additionally, certmonger will monitor the certificate, renew it before it expires, and restart the radiusd service after the certificate was renewed. Verify that the CA successfully issued the certificate: Create the /etc/raddb/certs/dh file with Diffie-Hellman (DH) parameters. For example, to create a DH file with a 2048 bits prime, enter: For security reasons, do not use a DH file with less than a 2048 bits prime. Depending on the number of bits, the creation of the file can take several minutes. Edit the /etc/raddb/mods-available/eap file: Configure the TLS-related settings in the tls-config tls-common directive: Set the default_eap_type parameter in the eap directive to ttls : Comment out the md5 directives to disable the insecure EAP-MD5 authentication method: Note that, in the default configuration file, other insecure EAP authentication methods are commented out by default. Edit the /etc/raddb/sites-available/default file, and comment out all authentication methods other than eap : This leaves only EAP enabled for the outer authentication and disables plain-text authentication methods. Edit the /etc/raddb/sites-available/inner-tunnel file, and make the following changes: Comment out the -ldap entry and add the ldap module configuration to the authorize directive: Uncomment the LDAP authentication type in the authenticate directive: Enable the ldap module: Edit the /etc/raddb/mods-available/ldap file, and make the following changes: In the ldap directive, set the IdM LDAP server URL and the base distinguished name (DN): Specify the ldaps protocol in the server URL to use TLS-encrypted connections between the FreeRADIUS host and the IdM server. In the ldap directive, enable TLS certificate validation of the IdM LDAP server: Edit the /etc/raddb/clients.conf file: Set a secure password in the localhost and localhost_ipv6 client directives: Add a client directive for the network authenticator: Optional: If other hosts should also be able to access the FreeRADIUS service, add client directives for them as well, for example: The ipaddr parameter accepts IPv4 and IPv6 addresses, and you can use the optional classless inter-domain routing (CIDR) notation to specify ranges. However, you can set only one value in this parameter. For example, to grant access to both an IPv4 and IPv6 address, you must add two client directives. Use a descriptive name for the client directive, such as a hostname or a word that describes where the IP range is used. Verify the configuration files: Open the RADIUS ports in the firewalld service: Enable and start the radiusd service: Verification Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator Troubleshooting If the radiusd service fails to start, verify that you can resolve the IdM server host name: For other problems, run radiusd in debug mode: Stop the radiusd service: Start the service in debug mode: Perform authentication tests on the FreeRADIUS host, as referenced in the Verification section. steps Disable no longer required authentication methods and other features you do not use. 35.4. Configuring hostapd as an authenticator in a wired network The host access point daemon ( hostapd ) service can act as an authenticator in a wired network to provide 802.1X authentication. For this, the hostapd service requires a RADIUS server that authenticates the clients. The hostapd service provides an integrated RADIUS server. However, use the integrated RADIUS server only for testing purposes. 
For production environments, use a FreeRADIUS server, which supports additional features, such as different authentication methods and access control. Important The hostapd service does not interact with the traffic plane. The service acts only as an authenticator. For example, use a script or service that uses the hostapd control interface to allow or deny traffic based on the result of authentication events. Prerequisites You installed the hostapd package. The FreeRADIUS server has been configured, and it is ready to authenticate clients. Procedure Create the /etc/hostapd/hostapd.conf file with the following content: For further details about the parameters used in this configuration, see their descriptions in the /usr/share/doc/hostapd/hostapd.conf example configuration file. Enable and start the hostapd service: Verification Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator Troubleshooting If the hostapd service fails to start, verify that the bridge interface you use in the /etc/hostapd/hostapd.conf file is present on the system: For other problems, run hostapd in debug mode: Stop the hostapd service: Start the service in debug mode: Perform authentication tests on the FreeRADIUS host, as referenced in the Verification section. Additional resources hostapd.conf(5) man page on your system /usr/share/doc/hostapd/hostapd.conf file 35.5. Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator To test if authentication by using extensible authentication protocol (EAP) over tunneled transport layer security (EAP-TTLS) works as expected, run this procedure: After you set up the FreeRADIUS server After you set up the hostapd service as an authenticator for 802.1X network authentication. The output of the test utilities used in this procedure provides additional information about the EAP communication and helps you to debug problems. Prerequisites When you want to authenticate to: A FreeRADIUS server: The eapol_test utility, provided by the hostapd package, is installed. The client, on which you run this procedure, has been authorized in the FreeRADIUS server's client databases. An authenticator, the wpa_supplicant utility, provided by the same-named package, is installed. You stored the certificate authority (CA) certificate in the /etc/ipa/ca.crt file. Procedure Optional: Create a user in Identity Management (IdM): Create the /etc/wpa_supplicant/wpa_supplicant-TTLS.conf file with the following content: To authenticate to: A FreeRADIUS server, enter: The -a option defines the IP address of the FreeRADIUS server, and the -s option specifies the password for the host on which you run the command in the FreeRADIUS server's client configuration. An authenticator, enter: The -i option specifies the network interface name on which wpa_supplicant sends out extensible authentication protocol over LAN (EAPOL) packets. For more debugging information, pass the -d option to the command. Additional resources /usr/share/doc/wpa_supplicant/wpa_supplicant.conf file 35.6. Blocking and allowing traffic based on hostapd authentication events The hostapd service does not interact with the traffic plane. The service acts only as an authenticator. However, you can write a script to allow and deny traffic based on the result of authentication events. Important This procedure is not supported and is not an enterprise-ready solution. It only demonstrates how to block or allow traffic by evaluating events retrieved by hostapd_cli .
When the 802-1x-tr-mgmt systemd service starts, RHEL blocks all traffic on the listen port of hostapd except extensible authentication protocol over LAN (EAPOL) packets and uses the hostapd_cli utility to connect to the hostapd control interface. The /usr/local/bin/802-1x-tr-mgmt script then evaluates events. Depending on the different events received by hostapd_cli , the script allows or blocks traffic for MAC addresses. Note that, when the 802-1x-tr-mgmt service stops, all traffic is automatically allowed again. Perform this procedure on the hostapd server. Prerequisites The hostapd service has been configured, and the service is ready to authenticate clients. Procedure Create the /usr/local/bin/802-1x-tr-mgmt file with the following content: #!/bin/sh TABLE="tr-mgmt-USD{1}" read -r -d '' TABLE_DEF << EOF table bridge USD{TABLE} { set allowed_macs { type ether_addr } chain accesscontrol { ether saddr @allowed_macs accept ether daddr @allowed_macs accept drop } chain forward { type filter hook forward priority 0; policy accept; meta ibrname "br0" jump accesscontrol } } EOF case USD{2:-NOTANEVENT} in block_all) nft destroy table bridge "USDTABLE" printf "USDTABLE_DEF" | nft -f - echo "USD1: All the bridge traffic blocked. Traffic for a client with a given MAC will be allowed after 802.1x authentication" ;; AP-STA-CONNECTED | CTRL-EVENT-EAP-SUCCESS | CTRL-EVENT-EAP-SUCCESS2) nft add element bridge tr-mgmt-br0 allowed_macs { USD3 } echo "USD1: Allowed traffic from USD3" ;; AP-STA-DISCONNECTED | CTRL-EVENT-EAP-FAILURE) nft delete element bridge tr-mgmt-br0 allowed_macs { USD3 } echo "USD1: Denied traffic from USD3" ;; allow_all) nft destroy table bridge "USDTABLE" echo "USD1: Allowed all bridge traffic again" ;; NOTANEVENT) echo "USD0 was called incorrectly, usage: USD0 interface event [mac_address]" ;; esac Create the /etc/systemd/system/[email protected] systemd service file with the following content: Reload systemd: Enable and start the 802-1x-tr-mgmt service with the interface name hostapd is listening on: Verification Authenticate with a client to the network. See Testing EAP-TTLS authentication against a FreeRADIUS server or authenticator . Additional resources systemd.service(5) man page on your system | [
"nmcli connection add type bridge con-name br0 ifname br0",
"nmcli connection add type ethernet port-type bridge con-name br0-port1 ifname enp1s0 controller br0 nmcli connection add type ethernet port-type bridge con-name br0-port2 ifname enp7s0 controller br0 nmcli connection add type ethernet port-type bridge con-name br0-port3 ifname enp8s0 controller br0 nmcli connection add type ethernet port-type bridge con-name br0-port4 ifname enp9s0 controller br0",
"nmcli connection modify br0 group-forward-mask 8",
"nmcli connection modify br0 connection.autoconnect-ports 1",
"nmcli connection up br0",
"ip link show master br0 3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff",
"cat /sys/class/net/br0/bridge/group_fwd_mask 0x8",
"ipa-getcert request -w -k /etc/pki/tls/private/radius.key -f /etc/pki/tls/certs/radius.pem -o \"root:radiusd\" -m 640 -O \"root:radiusd\" -M 640 -T caIPAserviceCert -C 'systemctl restart radiusd.service' -N freeradius.idm.example.com -D freeradius.idm.example.com -K radius/ freeradius.idm.example.com",
"ipa-getcert list -f /etc/pki/tls/certs/radius.pem Number of certificates and requests being tracked: 1. Request ID '20240918142211': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/radius.key' certificate: type=FILE,location='/etc/pki/tls/certs/radius.crt'",
"openssl dhparam -out /etc/raddb/certs/dh 2048",
"eap { tls-config tls-common { private_key_file = /etc/pki/tls/private/radius.key certificate_file = /etc/pki/tls/certs/radius.pem ca_file = /etc/ipa/ca.crt } }",
"eap { default_eap_type = ttls }",
"eap { # md5 { # } }",
"authenticate { # Auth-Type PAP { # pap # } # Auth-Type CHAP { # chap # } # Auth-Type MS-CHAP { # mschap # } # mschap # digest }",
"authorize { #-ldap ldap if ((ok || updated) && User-Password) { update { control:Auth-Type := ldap } } }",
"authenticate { Auth-Type LDAP { ldap } }",
"ln -s /etc/raddb/mods-available/ldap /etc/raddb/mods-enabled/ldap",
"ldap { server = 'ldaps:// idm_server.idm.example.com ' base_dn = 'cn=users,cn=accounts, dc=idm,dc=example,dc=com ' }",
"tls { require_cert = 'demand' }",
"client localhost { ipaddr = 127.0.0.1 secret = localhost_client_password } client localhost_ipv6 { ipv6addr = ::1 secret = localhost_client_password }",
"client hostapd.example.org { ipaddr = 192.0.2.2/32 secret = hostapd_client_password }",
"client <hostname_or_description> { ipaddr = <IP_address_or_range> secret = <client_password> }",
"radiusd -XC Configuration appears to be OK",
"firewall-cmd --permanent --add-service=radius firewall-cmd --reload",
"systemctl enable --now radiusd",
"host -v idm_server.idm.example.com",
"systemctl stop radiusd",
"radiusd -X Ready to process requests",
"General settings of hostapd =========================== Control interface settings ctrl_interface= /var/run/hostapd ctrl_interface_group= wheel Enable logging for all modules logger_syslog= -1 logger_stdout= -1 Log level logger_syslog_level= 2 logger_stdout_level= 2 Wired 802.1X authentication =========================== Driver interface type driver=wired Enable IEEE 802.1X authorization ieee8021x=1 Use port access entry (PAE) group address (01:80:c2:00:00:03) when sending EAPOL frames use_pae_group_addr=1 Network interface for authentication requests interface= br0 RADIUS client configuration =========================== Local IP address used as NAS-IP-Address own_ip_addr= 192.0.2.2 Unique NAS-Identifier within scope of RADIUS server nas_identifier= hostapd.example.org RADIUS authentication server auth_server_addr= 192.0.2.1 auth_server_port= 1812 auth_server_shared_secret= hostapd_client_password RADIUS accounting server acct_server_addr= 192.0.2.1 acct_server_port= 1813 acct_server_shared_secret= hostapd_client_password",
"systemctl enable --now hostapd",
"ip link show br0",
"systemctl stop hostapd",
"hostapd -d /etc/hostapd/hostapd.conf",
"ipa user-add --first \" Test \" --last \" User \" idm_user --password",
"ap_scan=0 network={ eap=TTLS eapol_flags=0 key_mgmt=IEEE8021X # Anonymous identity (sent in unencrypted phase 1) # Can be any string anonymous_identity=\" anonymous \" # Inner authentication (sent in TLS-encrypted phase 2) phase2=\"auth= PAP \" identity=\" idm_user \" password=\" idm_user_password \" # CA certificate to validate the RADIUS server's identity ca_cert=\" /etc/ipa/ca.crt \" }",
"eapol_test -c /etc/wpa_supplicant/wpa_supplicant-TTLS.conf -a 192.0.2.1 -s <client_password> EAP: Status notification: remote certificate verification (param=success) CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully SUCCESS",
"wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant-TTLS.conf -D wired -i enp0s31f6 enp0s31f6: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully",
"#!/bin/sh TABLE=\"tr-mgmt-USD{1}\" read -r -d '' TABLE_DEF << EOF table bridge USD{TABLE} { set allowed_macs { type ether_addr } chain accesscontrol { ether saddr @allowed_macs accept ether daddr @allowed_macs accept drop } chain forward { type filter hook forward priority 0; policy accept; meta ibrname \"br0\" jump accesscontrol } } EOF case USD{2:-NOTANEVENT} in block_all) nft destroy table bridge \"USDTABLE\" printf \"USDTABLE_DEF\" | nft -f - echo \"USD1: All the bridge traffic blocked. Traffic for a client with a given MAC will be allowed after 802.1x authentication\" ;; AP-STA-CONNECTED | CTRL-EVENT-EAP-SUCCESS | CTRL-EVENT-EAP-SUCCESS2) nft add element bridge tr-mgmt-br0 allowed_macs { USD3 } echo \"USD1: Allowed traffic from USD3\" ;; AP-STA-DISCONNECTED | CTRL-EVENT-EAP-FAILURE) nft delete element bridge tr-mgmt-br0 allowed_macs { USD3 } echo \"USD1: Denied traffic from USD3\" ;; allow_all) nft destroy table bridge \"USDTABLE\" echo \"USD1: Allowed all bridge traffice again\" ;; NOTANEVENT) echo \"USD0 was called incorrectly, usage: USD0 interface event [mac_address]\" ;; esac",
"[Unit] Description=Example 802.1x traffic management for hostapd After=hostapd.service After=sys-devices-virtual-net-%i.device [Service] Type=simple ExecStartPre=bash -c '/usr/sbin/hostapd_cli ping | grep PONG' ExecStartPre=/usr/local/bin/802-1x-tr-mgmt %i block_all ExecStart=/usr/sbin/hostapd_cli -i %i -a /usr/local/bin/802-1x-tr-mgmt ExecStopPost=/usr/local/bin/802-1x-tr-mgmt %i allow_all [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable --now [email protected]"
]
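Before starting radiusd, it can be useful to confirm that the LDAP URL and base DN configured in /etc/raddb/mods-available/ldap answer queries at all. This troubleshooting sketch is not part of the chapter: it assumes the openldap-clients package is installed and that the IdM directory permits the anonymous read used here; the server name, base DN, and user are the example values from the chapter.

```
# Query the IdM LDAP server with the same URL and base DN that FreeRADIUS uses
ldapsearch -x \
  -H ldaps://idm_server.idm.example.com \
  -b 'cn=users,cn=accounts,dc=idm,dc=example,dc=com' \
  '(uid=idm_user)' uid
```

If this search fails with a certificate error, the host does not trust the IdM CA, and the same trust problem will also break the require_cert = 'demand' setting in the FreeRADIUS ldap module.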
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/assembly_setting-up-an-802-1x-network-authentication-service-for-lan-clients-using-hostapd-with-freeradius-backend_configuring-and-managing-networking |
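Because the 802-1x-tr-mgmt script in section 35.6 keeps its state in an nftables set, its effect can be inspected and exercised directly from a shell on the authenticator. The commands below are a verification sketch that is not part of the chapter; they assume the 802-1x-tr-mgmt@br0 service is already running (so the tr-mgmt-br0 table exists), and the MAC address is a made-up example.

```
# Show the table and the allowed_macs set maintained by the script
nft list table bridge tr-mgmt-br0

# Simulate an authentication event by calling the script the same way hostapd_cli does:
# <interface> <event> <mac_address>
/usr/local/bin/802-1x-tr-mgmt br0 AP-STA-CONNECTED 52:54:00:aa:bb:cc
nft list table bridge tr-mgmt-br0

# Remove the test entry again
/usr/local/bin/802-1x-tr-mgmt br0 AP-STA-DISCONNECTED 52:54:00:aa:bb:cc
```

A real client's MAC address appears in the set after a successful EAP-TTLS authentication and disappears again when hostapd reports a disconnect or an EAP failure.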
Chapter 6. Installing | Chapter 6. Installing 6.1. Preparing your cluster for OpenShift Virtualization Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements. Important You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration. FIPS mode If you install your cluster in FIPS mode , no additional setup is required for OpenShift Virtualization. IPv6 You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. ( BZ#2193267 ) 6.1.1. Hardware and operating system requirements Review the following hardware and operating system requirements for OpenShift Virtualization. Supported platforms On-premise bare metal servers Amazon Web Services bare metal instances. See Deploy OpenShift Virtualization on AWS Bare Metal Nodes for details. IBM Cloud Bare Metal Servers. See Deploy OpenShift Virtualization on IBM Cloud Bare Metal Nodes for details. Important Installing OpenShift Virtualization on AWS bare metal instances or on IBM Cloud Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Bare metal instances or servers offered by other cloud providers are not supported. CPU requirements Supported by Red Hat Enterprise Linux (RHEL) 8 Support for Intel 64 or AMD64 CPU extensions Intel VT or AMD-V hardware virtualization extensions enabled NX (no execute) flag enabled Storage requirements Supported by OpenShift Container Platform Warning If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. Operating system requirements Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes Note RHEL worker nodes are not supported. If your cluster uses worker nodes with different CPUs, live migration failures can occur because different CPUs have different capabilities. To avoid such failures, use CPUs with appropriate capacity for each node and set node affinity on your virtual machines to ensure successful migration. See Configuring a required node affinity rule for more information. Additional resources About RHCOS . Red Hat Ecosystem Catalog for supported CPUs. Supported storage . 6.1.2. Physical resource overhead requirements OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance. Important The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments. 
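Section 6.1.1 lists Intel VT or AMD-V and the NX flag as CPU requirements but does not show how to verify them on a node. The following sketch is not part of the installation procedure: the node name is a placeholder, and the checks assume an x86_64 worker node reached through a debug pod.

```
# Open a debug shell on a candidate worker node
oc debug node/<node_name>

# In the debug shell: a non-zero count means hardware virtualization is exposed
# (vmx = Intel VT, svm = AMD-V)
grep -c -E 'vmx|svm' /proc/cpuinfo

# In the debug shell: confirm the NX (no execute) flag is present
grep -c -w nx /proc/cpuinfo
```

If either count is 0, enable the corresponding virtualization and NX settings in the server firmware before treating the node as a candidate for virtual machine workloads.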
6.1.2.1. Memory overhead Calculate the memory overhead values for OpenShift Virtualization by using the equations below. Cluster memory overhead Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes. Virtual machine memory overhead 1 Required for the processes that run in the virt-launcher pod. 2 Number of virtual CPUs requested by the virtual machine. 3 Number of virtual graphics cards requested by the virtual machine. 4 Additional memory overhead: If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device. 6.1.2.2. CPU overhead Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup. Cluster CPU overhead OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes. Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads. Virtual machine CPU overhead If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires. 6.1.2.3. Storage overhead Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment. Cluster storage overhead 10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization. Virtual machine storage overhead Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself. 6.1.2.4. Example As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores. 6.1.3. Object maximums You must consider the following tested object maximums when planning your cluster: OpenShift Container Platform object maximums . OpenShift Virtualization object maximums . 6.1.4. Restricted network environments If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks . If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub. 6.1.5. Live migration Live migration has the following requirements: Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. 
Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. 6.1.6. Snapshots and cloning See OpenShift Virtualization storage features for snapshot and cloning requirements. 6.1.7. Cluster high-availability options You can configure one of the following high-availability (HA) options for your cluster: Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks . Note In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes. Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node> . Note Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability. 6.2. Specifying nodes for OpenShift Virtualization components Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules. Note You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads. 6.2.1. About node placement for virtualization components You might want to customize where OpenShift Virtualization deploys its components to ensure that: Virtual machines only deploy on nodes that are intended for virtualization workloads. Operators only deploy on infrastructure nodes. Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization. 6.2.1.1. How to apply node placement rules to virtualization components You can specify node placement rules for a component by editing the corresponding object directly or by using the web console. For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console. For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation. 
For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console. Warning You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run. Depending on the object, you can use one or more of the following rule types: nodeSelector Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied. tolerations Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint. 6.2.1.2. Node placement in the OLM Subscription object To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: "stable" config: 1 1 The config field supports nodeSelector and tolerations , but it does not support affinity . 6.2.1.3. Node placement in the HyperConverged object To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 ... workloads: nodePlacement: ... 1 The nodePlacement fields support nodeSelector , affinity , and tolerations fields. 6.2.1.4. Node placement in the HostPathProvisioner object You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner. apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "</path/to/backing/directory>" useNamingPrefix: false workload: 1 1 The workload field supports nodeSelector , affinity , and tolerations fields. 6.2.1.5. Additional resources Specifying nodes for virtual machines Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints Installing OpenShift Virtualization using the CLI Installing OpenShift Virtualization using the web console Configuring local storage for virtual machines 6.2.2. Example manifests The following example YAML files use nodePlacement , affinity , and tolerations objects to customize node placement for OpenShift Virtualization components. 6.2.2.1. 
Operator Lifecycle Manager Subscription object 6.2.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object In this example, nodeSelector is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value . apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: "stable" config: nodeSelector: example.io/example-infra-key: example-infra-value 6.2.2.1.2. Example: Node placement with tolerations in the OLM Subscription object In this example, nodes that are reserved for OLM to deploy OpenShift Virtualization Operators are labeled with the key=virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled to these nodes. apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: "stable" config: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" 6.2.2.2. HyperConverged object 6.2.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 6.2.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value . Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 6.2.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR In this example, nodes that are reserved for OpenShift Virtualization components are labeled with the key=virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled to these nodes. 
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" 6.2.2.3. HostPathProvisioner object 6.2.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value . apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "</path/to/backing/directory>" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 6.3. Installing OpenShift Virtualization using the web console Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can use the OpenShift Container Platform 4.12 web console to subscribe to and deploy the OpenShift Virtualization Operators. 6.3.1. Installing the OpenShift Virtualization Operator You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console. Prerequisites Install OpenShift Container Platform 4.12 on your cluster. Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Operators OperatorHub . In the Filter by keyword field, type Virtualization . Select the {CNVOperatorDisplayName} tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page: Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. For Installed Namespace , ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist. Warning Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail. For Approval Strategy , it is highly recommended that you select Automatic , which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel. While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic . Warning Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported. Click Install to make the Operator available to the openshift-cnv namespace. When the Operator installs successfully, click Create HyperConverged . Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components. Click Create to launch OpenShift Virtualization. Verification Navigate to the Workloads Pods page and monitor the OpenShift Virtualization pods until they are all Running . After all the pods display the Running state, you can use OpenShift Virtualization. 6.3.2. 
steps You might want to additionally configure the following components: The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. 6.4. Installing OpenShift Virtualization using the CLI Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can subscribe to and deploy the OpenShift Virtualization Operators by using the command line to apply manifests to your cluster. Note To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules . 6.4.1. Prerequisites Install OpenShift Container Platform 4.12 on your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 6.4.2. Subscribing to the OpenShift Virtualization catalog by using the CLI Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators. To subscribe, configure Namespace , OperatorGroup , and Subscription objects by applying a single manifest to your cluster. Procedure Create a YAML file that contains the following manifest: apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: "stable" 1 1 Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. Create the required Namespace , OperatorGroup , and Subscription objects for OpenShift Virtualization by running the following command: USD oc apply -f <file name>.yaml Note You can configure certificate rotation parameters in the YAML file. 6.4.3. Deploying the OpenShift Virtualization Operator by using the CLI You can deploy the OpenShift Virtualization Operator by using the oc CLI. Prerequisites An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace. Procedure Create a YAML file that contains the following manifest: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: Deploy the OpenShift Virtualization Operator by running the following command: USD oc apply -f <file_name>.yaml Verification Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command: USD watch oc get csv -n openshift-cnv The following output displays if deployment was successful: Example output NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.12.16 OpenShift Virtualization 4.12.16 Succeeded 6.4.4. steps You might want to additionally configure the following components: The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. 6.5. 
Installing the virtctl client The virtctl client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, Windows, and macOS. 6.5.1. Installing the virtctl client on Linux, Windows, and macOS Download and install the virtctl client for your operating system. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Download virtctl link on the upper right corner of the page and download the virtctl client for your operating system. Install virtctl : For Linux: Decompress the archive file: USD tar -xvf <virtctl-version-distribution.arch>.tar.gz Run the following command to make the virtctl binary executable: USD chmod +x <path/virtctl-file-name> Move the virtctl binary to a directory in your PATH environment variable. You can check your path by running the following command: USD echo USDPATH Set the KUBECONFIG environment variable: USD export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig For Windows: Decompress the archive file. Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client. Move the virtctl binary to a directory in your PATH environment variable. You can check your path by running the following command: C:\> path For macOS: Decompress the archive file. Move the virtctl binary to a directory in your PATH environment variable. You can check your path by running the following command: echo USDPATH 6.5.2. Installing the virtctl as an RPM You can install the virtctl client on Red Hat Enterprise Linux (RHEL) as an RPM after enabling the OpenShift Virtualization repository. 6.5.2.1. Enabling OpenShift Virtualization repositories Enable the OpenShift Virtualization repository for your version of Red Hat Enterprise Linux (RHEL). Prerequisites Your system is registered to a Red Hat account with an active subscription to the "Red Hat Container Native Virtualization" entitlement. Procedure Enable the appropriate OpenShift Virtualization repository for your operating system by using the subscription-manager CLI tool. To enable the repository for RHEL 8, run: # subscription-manager repos --enable cnv-4.12-for-rhel-8-x86_64-rpms To enable the repository for RHEL 7, run: # subscription-manager repos --enable rhel-7-server-cnv-4.12-rpms 6.5.2.2. Installing the virtctl client using the yum utility Install the virtctl client from the kubevirt-virtctl package. Prerequisites You enabled an OpenShift Virtualization repository on your Red Hat Enterprise Linux (RHEL) system. Procedure Install the kubevirt-virtctl package: # yum install kubevirt-virtctl 6.5.3. Additional resources Using the CLI tools for OpenShift Virtualization. 6.6. Uninstalling OpenShift Virtualization You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources. 6.6.1. Uninstalling OpenShift Virtualization by using the web console You uninstall OpenShift Virtualization by using the web console to perform the following tasks: Delete the HyperConverged CR . Delete the OpenShift Virtualization Operator . Delete the openshift-cnv namespace . Delete the OpenShift Virtualization custom resource definitions (CRDs) . Important You must first delete all virtual machines , and virtual machine instances . You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. 6.6.1.1. 
Deleting the HyperConverged custom resource To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Select the OpenShift Virtualization Operator. Click the OpenShift Virtualization Deployment tab. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged . Click Delete in the confirmation window. 6.6.1.2. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 6.6.1.3. Deleting a namespace using the web console You can delete a namespace by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to Administration Namespaces . Locate the namespace that you want to delete in the list of namespaces. On the far right side of the namespace listing, select Delete Namespace from the Options menu . When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field. Click Delete . 6.6.1.4. Deleting OpenShift Virtualization custom resource definitions You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to Administration CustomResourceDefinitions . Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs. Click the Options menu beside each CRD and select Delete CustomResourceDefinition . 6.6.2. Uninstalling OpenShift Virtualization by using the CLI You can uninstall OpenShift Virtualization by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. 
Procedure Delete the HyperConverged custom resource: USD oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization Operator subscription: USD oc delete subscription kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization ClusterServiceVersion resource: USD oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Delete the OpenShift Virtualization namespace: USD oc delete namespace openshift-cnv List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option: USD oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Example output Delete the CRDs by running the oc delete crd command without the dry-run option: USD oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Additional resources Deleting virtual machines Deleting virtual machine instances | [
"Memory overhead per infrastructure node ~ 150 MiB",
"Memory overhead per worker node ~ 360 MiB",
"Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4",
"CPU overhead for infrastructure nodes ~ 4 cores",
"CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine",
"Aggregated storage overhead per node ~ 10 GiB",
"Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: \"stable\" config: 1",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 workloads: nodePlacement:",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value",
"apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.12.16 channel: \"stable\" 1",
"oc apply -f <file name>.yaml",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:",
"oc apply -f <file_name>.yaml",
"watch oc get csv -n openshift-cnv",
"NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.12.16 OpenShift Virtualization 4.12.16 Succeeded",
"tar -xvf <virtctl-version-distribution.arch>.tar.gz",
"chmod +x <path/virtctl-file-name>",
"echo USDPATH",
"export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig",
"C:\\> path",
"echo USDPATH",
"subscription-manager repos --enable cnv-4.12-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable rhel-7-server-cnv-4.12-rpms",
"yum install kubevirt-virtctl",
"oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv",
"oc delete subscription kubevirt-hyperconverged -n openshift-cnv",
"oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv",
"oc delete namespace openshift-cnv",
"oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv",
"customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)",
"oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/installing |
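The CLI uninstall steps above can also be run as a single script. The following is a minimal sketch rather than an official procedure; it assumes cluster-admin access with the oc client and that all virtual machines and virtual machine instances have already been deleted.
# Sketch: remove OpenShift Virtualization with the CLI in one pass (assumes no VM or VMI workloads remain).
set -euo pipefail
oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
oc delete subscription kubevirt-hyperconverged -n openshift-cnv
oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
oc delete namespace openshift-cnv
# Preview the CRDs that carry the OpenShift Virtualization label, then delete them.
oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv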
8.17. compat-openmpi | 8.17. compat-openmpi 8.17.1. RHBA-2013:1711 - compat-openmpi bug fix update Updated compat-openmpi packages that fix one bug are now available for Red Hat Enterprise Linux 6. The compat-openmpi packages contain shared libraries from earlier versions of Open Message Passing Interface (Open MPI). The libraries from releases have been compiled against the current version of Red Hat Enterprise Linux 6, and the packages enable earlier programs to keep functioning properly. Bug Fix BZ# 876315 The compat-openmpi packages previously did not ensure compatibility with earlier versions of the Open MPI shared libraries. Consequently, the users failed to run certain applications using Open MPI on Red Hat Enterprise Linux 6.3 and later if those applications were compiled against Open MPI versions used on Red Hat Enterprise Linux 6.2 and earlier. After this update, the compat-openmpi packages now maintain compatibility with earlier versions of Open MPI on Red Hat Enterprise Linux 6. Users of compat-openmpi are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/compat-openmpi |
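To pick up the fixed packages on an affected system, a routine package update is enough. The commands below are a generic illustration rather than text from the advisory; the first shows the installed version and the second pulls in the updated packages, using the same root-prompt convention as the other package commands in this document.
# rpm -q compat-openmpi
# yum update compat-openmpi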
Chapter 3. Managing secured clusters | Chapter 3. Managing secured clusters To secure a Kubernetes or an OpenShift Container Platform cluster, you must deploy Red Hat Advanced Cluster Security for Kubernetes (RHACS) services into the cluster. You can generate deployment files in the RHACS portal by navigating to the Platform Configuration Clusters view, or you can use the roxctl CLI. 3.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: USD export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 3.2. Generating Sensor deployment files Generating files for Kubernetes systems Procedure Generate the required sensor configuration for your Kubernetes cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate k8s --name <cluster_name> --central "USDROX_ENDPOINT" Generating files for OpenShift Container Platform systems Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . Read the --help output to see other options that you might need to use depending on your system architecture. Verify that the endpoint you provide for --central can be reached from the cluster where you are deploying Red Hat Advanced Cluster Security for Kubernetes services. Important If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), follow these guidelines: Use the WebSocket Secure ( wss ) protocol. To use wss , prefix the address with wss:// , and Add the port number after the address, for example: USD roxctl sensor generate k8s --central wss://stackrox-central.example.com:443 3.3. Installing Sensor by using the sensor.sh script When you generate the Sensor deployment files, roxctl creates a directory called sensor-<cluster_name> in your working directory. The script to install Sensor is located in this directory. Procedure Run the sensor installation script to install Sensor: USD ./sensor- <cluster_name> /sensor.sh If you get a warning that you do not have the required permissions to install Sensor, follow the on-screen instructions, or contact your cluster administrator for help. 3.4. Downloading Sensor bundles for existing clusters Procedure Run the following command to download Sensor bundles for existing clusters by specifying a cluster name or ID : USD roxctl sensor get-bundle <cluster_name_or_id> 3.5. Deleting cluster integration Procedure Before deleting the cluster, ensure you have the correct cluster name that you want to remove from Central: USD roxctl cluster delete --name= <cluster_name> Important Deleting the cluster integration does not remove the RHACS services running in the cluster, depending on the installation method. You can remove the services by running the delete-sensor.sh script from the Sensor installation bundle. | [
"export ROX_ENDPOINT= <host:port> 1",
"roxctl sensor generate k8s --name <cluster_name> --central \"USDROX_ENDPOINT\"",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"roxctl sensor generate k8s --central wss://stackrox-central.example.com:443",
"./sensor- <cluster_name> /sensor.sh",
"roxctl sensor get-bundle <cluster_name_or_id>",
"roxctl cluster delete --name= <cluster_name>"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/roxctl_cli/managing-secured-clusters-1 |
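Taken together, the steps in this chapter can be scripted end to end. The following is a minimal sketch under stated assumptions: roxctl is installed and authenticated against Central (for example through the ROX_API_TOKEN environment variable), oc or kubectl is logged in to the cluster being secured, and the endpoint and cluster name values are placeholders rather than values from this document.
# Sketch: generate the Sensor deployment bundle for a Kubernetes cluster and install it.
set -euo pipefail
export ROX_ENDPOINT="central.example.com:443"   # placeholder Central endpoint (host:port)
CLUSTER_NAME="production-east"                  # placeholder cluster name
roxctl sensor generate k8s --name "$CLUSTER_NAME" --central "$ROX_ENDPOINT"
# roxctl writes the bundle to ./sensor-<cluster_name>; run its installer script.
./sensor-"$CLUSTER_NAME"/sensor.sh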
Chapter 4. Remote health monitoring with connected clusters | Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. 
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set Errors that occur in the cluster components Progress information of running updates, and the status of any component upgrades Details of the platform that OpenShift Container Platform is deployed on and the region that the cluster is located in Cluster workload information transformed into discreet Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest. Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. 
Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting . 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can view the cluster and components time series data captured by Telemetry. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have access to the cluster as a user with the cluster-admin role or the cluster-monitoring-view role. Procedure Log in to a cluster. Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry: Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
USD curl -G -k -H "Authorization: Bearer USD(oc whoami -t)" \ https://USD(oc get route prometheus-k8s-federate -n \ openshift-monitoring -o jsonpath="{.spec.host}")/federate \ --data-urlencode 'match[]={__name__=~"cluster:usage:.*"}' \ --data-urlencode 'match[]={__name__="count:up0"}' \ --data-urlencode 'match[]={__name__="count:up1"}' \ --data-urlencode 'match[]={__name__="cluster_version"}' \ --data-urlencode 'match[]={__name__="cluster_version_available_updates"}' \ --data-urlencode 'match[]={__name__="cluster_version_capability"}' \ --data-urlencode 'match[]={__name__="cluster_operator_up"}' \ --data-urlencode 'match[]={__name__="cluster_operator_conditions"}' \ --data-urlencode 'match[]={__name__="cluster_version_payload"}' \ --data-urlencode 'match[]={__name__="cluster_installer"}' \ --data-urlencode 'match[]={__name__="cluster_infrastructure_provider"}' \ --data-urlencode 'match[]={__name__="cluster_feature_set"}' \ --data-urlencode 'match[]={__name__="instance:etcd_object_counts:sum"}' \ --data-urlencode 'match[]={__name__="ALERTS",alertstate="firing"}' \ --data-urlencode 'match[]={__name__="code:apiserver_request_total:rate:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_memory_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="openshift:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="openshift:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \ --data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \ --data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \ --data-urlencode 'match[]={__name__="subscription_sync_total"}' \ --data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \ --data-urlencode 'match[]={__name__="csv_succeeded"}' \ --data-urlencode 'match[]={__name__="csv_abnormal"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_health_status"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_total_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_used_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_health_status"}' \ --data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \ --data-urlencode 'match[]={__name__="job:kube_pv:count"}' \ --data-urlencode 'match[]={__name__="job:odf_system_pvs:count"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \ 
--data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \ --data-urlencode 'match[]={__name__="odf_system_bucket_count", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="odf_system_objects_total", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \ --data-urlencode 'match[]={__name__="noobaa_total_usage"}' \ --data-urlencode 'match[]={__name__="console_url"}' \ --data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \ --data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:min"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:max"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:avg"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:median"}' \ --data-urlencode 'match[]={__name__="cluster:openshift_route_info:tls_termination:sum"}' \ --data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \ --data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \ --data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \ --data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \ --data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \ --data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \ --data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \ --data-urlencode 'match[]={__name__="rhmi_status"}' \ --data-urlencode 'match[]={__name__="status:upgrading:version:rhoam_state:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_critical_alerts:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_warning_alerts:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_percentile:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_remaining_error_budget:max"}' \ --data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \ --data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \ --data-urlencode 'match[]={__name__="che_workspace_status"}' \ --data-urlencode 'match[]={__name__="che_workspace_started_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_sum"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \ --data-urlencode 'match[]={__name__="cco_credentials_mode"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \ 
--data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \ --data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \ --data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \ --data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \ --data-urlencode 'match[]={__name__="rhods_total_users"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \ --data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \ --data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \ --data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_storage_info"}' \ --data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \ --data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \ --data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \ --data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \ --data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \ --data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \ --data-urlencode 'match[]={__name__="log_logging_info"}' \ --data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \ --data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \ --data-urlencode 'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_hostedclusters:max"}' \ --data-urlencode 
'match[]={__name__="platform:hypershift_nodepools:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_bucket_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_buckets_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_accounts:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_usage:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_system_health_status:max"}' \ --data-urlencode 'match[]={__name__="ocs_advanced_feature_usage"}' \ --data-urlencode 'match[]={__name__="os_image_url_override:sum"}' \ --data-urlencode 'match[]={__name__="openshift:openshift_network_operator_ipsec_state:info"}' 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. The OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. 
In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. USD oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Registering your disconnected cluster Register your disconnected OpenShift Container Platform cluster on the Red Hat Hybrid Cloud Console so that your cluster is not impacted by the consequences listed in the section named "Consequences of disabling remote health reporting". Important By registering your disconnected cluster, you can continue to report your subscription usage to Red Hat. In turn, Red Hat can return accurate usage and capacity trends associated with your subscription, so that you can use the returned information to better organize subscription allocations across all of your resources. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . You can log in to the Red Hat Hybrid Cloud Console. Procedure Go to the Register disconnected cluster web page on the Red Hat Hybrid Cloud Console. Optional: To access the Register disconnected cluster web page from the home page of the Red Hat Hybrid Cloud Console, go to the Cluster List navigation menu item and then select the Register cluster button. Enter your cluster's details in the provided fields on the Register disconnected cluster page. From the Subscription settings section of the page, select the subcription settings that apply to your Red Hat subscription offering. To register your disconnected cluster, select the Register cluster button. Additional resources Consequences of disabling remote health reporting How does the subscriptions service show my subscription data? (Getting Started with the Subscription Service) 4.3.4. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 
3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Enabling remote health reporting If you or your organization have disabled remote health reporting, you can enable this feature again. You can see that remote health reporting is disabled from the message "Insights not available" in the Status tile on the OpenShift Container Platform Web Console Overview page. To enable remote health reporting, you must Modify the global cluster pull secret with a new authorization token. Note Enabling remote health reporting enables both Insights Operator and Telemetry. 4.4.1. Modifying your global cluster pull secret to enable remote health reporting You can modify your existing global cluster pull secret to enable remote health reporting. If you have previously disabled remote health monitoring, you must first download a new pull secret with your console.openshift.com access token from Red Hat OpenShift Cluster Manager. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to OpenShift Cluster Manager. Procedure Navigate to https://console.redhat.com/openshift/downloads . From Tokens Pull Secret , click Download . The file pull-secret.txt containing your cloud.openshift.com access token in JSON format downloads: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": " <email_address> " } } } Download the global cluster pull secret to your local file system. USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret Make a backup copy of your pull secret. USD cp pull-secret pull-secret-backup Open the pull-secret file in a text editor. Append the cloud.openshift.com JSON entry from pull-secret.txt into auths . Save the file. Update the secret in your cluster. oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret It may take several minutes for the secret to update and your cluster to begin reporting. Verification Navigate to the OpenShift Container Platform Web Console Overview page. Insights in the Status tile reports the number of issues found. 4.5. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. Whether you are concerned about individual clusters, or with your whole infrastructure, it is important to be aware of the exposure of your cluster infrastructure to issues that can affect service availability, fault tolerance, performance, or security. Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a library of recommendations . 
Each recommendation is a set of cluster-environment conditions that can leave OpenShift Container Platform clusters at risk. The results of the Insights analysis are available in the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.5.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.5.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.5.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Click the X icons to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.5.5. 
Advisor recommendation filters The Insights advisor service can return a large number of recommendations. To focus on your most critical recommendations, you can apply filters to the Advisor recommendations list to remove low-priority recommendations. By default, filters are set to only show enabled recommendations that are impacting one or more clusters. To view all or disabled recommendations in the Insights library, you can customize the filters. To apply a filter, select a filter type and then set its value based on the options that are available in the drop-down list. You can apply multiple filters to the list of recommendations. You can set the following filter types: Name: Search for a recommendation by name. Total risk: Select one or more values from Critical , Important , Moderate , and Low indicating the likelihood and the severity of a negative impact on a cluster. Impact: Select one or more values from Critical , High , Medium , and Low indicating the potential impact to the continuity of cluster operations. Likelihood: Select one or more values from Critical , High , Medium , and Low indicating the potential for a negative impact to a cluster if the recommendation comes to fruition. Category: Select one or more categories from Service Availability , Performance , Fault Tolerance , Security , and Best Practice to focus your attention on. Status: Click a radio button to show enabled recommendations (default), disabled recommendations, or all recommendations. Clusters impacted: Set the filter to show recommendations currently impacting one or more clusters, non-impacting recommendations, or all recommendations. Risk of change: Select one or more values from High , Moderate , Low , and Very low indicating the risk that the implementation of the resolution could have on cluster operations. 4.5.5.1. Filtering Insights advisor recommendations As an OpenShift Container Platform cluster manager, you can filter the recommendations that are displayed on the recommendations list. By applying filters, you can reduce the number of reported recommendations and concentrate on your highest priority recommendations. The following procedure demonstrates how to set and remove Category filters; however, the procedure is applicable to any of the filter types and respective values. Prerequisites You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console . Procedure Go to Red Hat Hybrid Cloud Console OpenShift Advisor recommendations . In the main, filter-type drop-down list, select the Category filter type. Expand the filter-value drop-down list and select the checkbox to each category of recommendation you want to view. Leave the checkboxes for unnecessary categories clear. Optional: Add additional filters to further refine the list. Only recommendations from the selected categories are shown in the list. Verification After applying filters, you can view the updated recommendations list. The applied filters are added to the default filters. 4.5.5.2. Removing filters from Insights Advisor recommendations You can apply multiple filters to the list of recommendations. When ready, you can remove them individually or completely reset them. Removing filters individually Click the X icon to each filter, including the default filters, to remove them individually. Removing all non-default filters Click Reset filters to remove only the filters that you applied, leaving the default filters in place. 4.5.6. 
Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Optional: Use the Clusters Impacted and Status filters as needed. Disable an alert by using one of the following methods: To disable an alert: Click the Options menu for that alert, and then click Disable recommendation . Enter a justification note and click Save . To view the clusters affected by this alert before disabling the alert: Click the name of the recommendation to disable. You are directed to the single recommendation page. Review the list of clusters in the Affected clusters section. Click Actions Disable recommendation to disable the alert for all of your clusters. Enter a justification note and click Save . 4.5.7. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you no longer see the recommendation in the Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Filter the recommendations to display on the disabled recommendations: From the Status drop-down menu, select Status . From the Filter by status drop-down menu, select Disabled . Optional: Clear the Clusters impacted filter. Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.5.8. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager . Prerequisites Your cluster is registered in OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.6. Using the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 
For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.6.1. Configuring Insights Operator Insights Operator configuration is a combination of the default Operator configuration and the configuration that is stored in either the insights-config ConfigMap object in the openshift-insights namespace, OR in the support secret in the openshift-config namespace. When a ConfigMap object or support secret exists, the contained attribute values override the default Operator configuration values. If both a ConfigMap object and a support secret exist, the Operator reads the ConfigMap object. The ConfigMap object does not exist by default, so an OpenShift Container Platform cluster administrator must create it. ConfigMap object configuration structure This example of an insights-config ConfigMap object ( config.yaml configuration) shows configuration options using standard YAML formatting. Configurable attributes and default values The table below describes the available configuration attributes: Note The insights-config ConfigMap object follows standard YAML formatting, wherein child values are below the parent attribute and indented two spaces. For the Obfuscation attribute, enter values as bulleted children of the parent attribute. Table 4.1. Insights Operator configurable attributes Attribute name Description Value type Default value Obfuscation: - networking Enables the global obfuscation of IP addresses and the cluster domain name. Boolean false Obfuscation: - workload_names Obfuscate data coming from the Deployment Validation Operator if it is installed. Boolean false sca: interval Specifies the frequency of the simple content access entitlements download. Time interval 8h sca: disabled Disables the simple content access entitlements download. Boolean false alerting: disabled Disables Insights Operator alerts to the cluster Prometheus instance. Boolean false httpProxy , httpsProxy , noProxy Set custom proxy for Insights Operator URL No default 4.6.1.1. Creating the insights-config ConfigMap object This procedure describes how to create the insights-config ConfigMap object for the Insights Operator to set custom configurations. Important Red Hat recommends you consult Red Hat Support before making changes to the default Insights Operator configuration. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with cluster-admin role. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click Create ConfigMap . Select Configure via: YAML view and enter your configuration preferences, for example apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false Optional: Select Form view and enter the necessary information that way. In the ConfigMap Name field, enter insights-config . In the Key field, enter config.yaml . For the Value field, either browse for a file to drag and drop into the field or enter your configuration parameters manually. Click Create and you can see the ConfigMap object and configuration information. 4.6.2. Understanding Insights Operator alerts The Insights Operator declares alerts through the Prometheus monitoring system to the Alertmanager. 
You can view these alerts in the Alerting UI in the OpenShift Container Platform web console by using one of the following methods: In the Administrator perspective, click Observe Alerting . In the Developer perspective, click Observe <project_name> Alerts tab. Currently, Insights Operator sends the following alerts when the conditions are met: Table 4.2. Insights Operator alerts Alert Description InsightsDisabled Insights Operator is disabled. SimpleContentAccessNotAvailable Simple content access is not enabled in Red Hat Subscription Management. InsightsRecommendationActive Insights has an active recommendation for the cluster. 4.6.2.1. Disabling Insights Operator alerts To prevent the Insights Operator from sending alerts to the cluster Prometheus instance, you create or edit the insights-config ConfigMap object. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. If the insights-config ConfigMap object does not exist, you must create it when you first add custom configurations. Note that configurations within the ConfigMap object take precedence over the default settings defined in the config/pod.yaml file. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the alerting attribute to disabled: true . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | alerting: disabled: true # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml alerting attribute is set to disabled: true . After you save the changes, Insights Operator no longer sends alerts to the cluster Prometheus instance. 4.6.2.2. Enabling Insights Operator alerts When alerts are disabled, the Insights Operator no longer sends alerts to the cluster Prometheus instance. You can reenable them. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the alerting attribute to disabled: false . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | alerting: disabled: false # ... Click Save . The insights-config config-map details page opens. 
Verify that the value of the config.yaml alerting attribute is set to disabled: false . After you save the changes, Insights Operator again sends alerts to the cluster Prometheus instance. 4.6.3. Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: USD oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.6.4. Running an Insights Operator gather operation You can run Insights Operator data gather operations on demand. The following procedures describe how to run the default list of gather operations using the OpenShift web console or CLI. You can customize the on demand gather function to exclude any gather operations you choose. Disabling gather operations from the default list degrades Insights Advisor's ability to offer effective recommendations for your cluster. If you have previously disabled Insights Operator gather operations in your cluster, this procedure will override those parameters. Important The DataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note If you enable Technology Preview in your cluster, the Insights Operator runs gather operations in individual pods. This is part of the Technology Preview feature set for the Insights Operator and supports the new data gathering features. 4.6.4.1. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.6.4.2. Running an Insights Operator gather operation from the web console To collect data, you can run an Insights Operator gather operation by using the OpenShift Container Platform web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. 
Procedure On the console, select Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, in the Search by name field, find the DataGather resource definition, and then click it. On the CustomResourceDefinition details page, click the Instances tab. Click Create DataGather . To create a new DataGather operation, edit the following configuration file and then save your changes. apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled 1 Under metadata , replace <your_data_gather> with a unique name for the gather operation. 2 Under gatherers , specify any individual gather operations that you intend to disable. In the example provided, workloads is the only data gather operation that is disabled and all of the other default operations are set to run. When the spec parameter is empty, all of the default gather operations run. Important Do not add a prefix of periodic-gathering- to the name of your gather operation because this string is reserved for other administrative operations and might impact the intended gather operation. Verification On the console, navigate to Workloads Pods . On the Pods page, go to the Project pull-down menu, and then select Show default projects . Select the openshift-insights project from the Project pull-down menu. Check that your new gather operation is prefixed with your chosen name under the list of pods in the openshift-insights project. Upon completion, the Insights Operator automatically uploads the data to Red Hat for processing. 4.6.4.3. Running an Insights Operator gather operation from the OpenShift CLI You can run an Insights Operator gather operation by using the OpenShift Container Platform command line interface. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Enter the following command to run the gather operation: USD oc apply -f <your_datagather_definition>.yaml Replace <your_datagather_definition>.yaml with a configuration file that contains the following parameters: apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled 1 Under metadata , replace <your_data_gather> with a unique name for the gather operation. 2 Under gatherers , specify any individual gather operations that you intend to disable. In the example provided, workloads is the only data gather operation that is disabled and all of the other default operations are set to run. When the spec parameter is empty, all of the default gather operations run. Important Do not add a prefix of periodic-gathering- to the name of your gather operation because this string is reserved for other administrative operations and might impact the intended gather operation. Verification Check that your new gather operation is prefixed with your chosen name under the list of pods in the openshift-insights project. Upon completion, the Insights Operator automatically uploads the data to Red Hat for processing. Additional resources Insights Operator Gathered Data GitHub repository 4.6.4.4. Disabling the Insights Operator gather operations You can disable the Insights Operator gather operations. Disabling the gather operations gives you the ability to increase privacy for your organization, as Insights Operator will no longer gather and send Insights cluster reports to Red Hat. 
This will disable Insights analysis and recommendations for your cluster without affecting other core functions that require communication with Red Hat such as cluster transfers. You can view a list of attempted gather operations for your cluster from the /insights-operator/gathers.json file in your Insights Operator archive. Be aware that some gather operations only occur when certain conditions are met and might not appear in your most recent archive. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note If you enable Technology Preview in your cluster, the Insights Operator runs gather operations in individual pods. This is part of the Technology Preview feature set for the Insights Operator and supports the new data gathering features. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Disable the gather operations by performing one of the following edits to the InsightsDataGather configuration file: To disable all the gather operations, enter all under the disabledGatherers key: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: 1 gatherConfig: disabledGatherers: - all 2 1 The spec parameter specifies gather configurations. 2 The all value disables all gather operations. To disable individual gather operations, enter their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Example individual gather operation Click Save . After you save the changes, the Insights Operator gather configurations are updated and the operations will no longer occur. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.4.5. Enabling the Insights Operator gather operations You can enable the Insights Operator gather operations, if the gather operations have been disabled. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. 
Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Enable the gather operations by performing one of the following edits: To enable all disabled gather operations, remove the gatherConfig stanza: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: gatherConfig: 1 disabledGatherers: all 1 Remove the gatherConfig stanza to enable all gather operations. To enable individual gather operations, remove their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Remove one or more gather operations. Click Save . After you save the changes, the Insights Operator gather configurations are updated and the affected gather operations start. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.5. Obfuscating Deployment Validation Operator data Cluster administrators can configure the Insights Operator to obfuscate data from the Deployment Validation Operator (DVO), if the Operator is installed. When the workload_names value is added to the insights-config ConfigMap object, workload names, rather than UIDs, are displayed in Insights for OpenShift, making them more recognizable for cluster administrators. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console with the "cluster-admin" role. The insights-config ConfigMap object exists in the openshift-insights namespace. The cluster is self-managed and the Deployment Validation Operator is installed. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the obfuscation attribute with the workload_names value. apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | dataReporting: obfuscation: - workload_names # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml obfuscation attribute is set to - workload_names . 4.7. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . Additionally, you can choose to obfuscate the Insights Operator data before upload. 4.7.1. Running an Insights Operator gather operation You must run a gather operation to create an Insights Operator archive. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . 
Procedure Create a file named gather-job.yaml using this template: apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}] Copy your insights-operator image version: USD oc get -n openshift-insights deployment insights-operator -o yaml Example output apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights # ... spec: template: # ... spec: containers: - args: # ... image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 # ... 1 Specifies your insights-operator image version. Paste your image version in gather-job.yaml : apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job # ... spec: # ... template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts: 1 Replace any existing value with your insights-operator image version. Create the gather job: USD oc apply -n openshift-insights -f gather-job.yaml Find the name of the job pod: USD oc describe -n openshift-insights job/insights-operator-job Example output Name: insights-operator-job Namespace: openshift-insights # ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job> where insights-operator-job-<your_job> is the name of the pod. Verify that the operation has finished: USD oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator Example output I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms Save the created archive: USD oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data Clean up the job: USD oc delete -n openshift-insights job insights-operator-job 4.7.2. Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive. 
Procedure Download the dockerconfig.json file: USD oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "[email protected]" } } Upload the archive to console.redhat.com : USD curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> " -H "Authorization: Bearer <your_token> " -F "upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Cluster List menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical). 4.7.3. Enabling Insights Operator data obfuscation You can enable obfuscation to mask sensitive and identifiable IPv4 addresses and cluster base domains that the Insights Operator sends to console.redhat.com . Warning Although this feature is available, Red Hat recommends keeping obfuscation disabled for a more effective support experience. Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is retained in memory to change IP addresses to their obfuscated versions throughout the Insights Operator archive before uploading the data to console.redhat.com . For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example, cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN> . The following procedure enables obfuscation using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named enableGlobalObfuscation with a value of true , and click Save . Navigate to Workloads Pods Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . Verification Navigate to Workloads Secrets . Select the openshift-insights project. Search for the obfuscation-translation-table secret using the Search by name field. If the obfuscation-translation-table secret exists, then obfuscation is enabled and working. Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the value "is_global_obfuscation_enabled": true . 
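You can also perform the same check from the command line. The following is a minimal sketch rather than a documented verification step; it assumes that you are logged in as cluster-admin and that obfuscation has been enabled and the insights-operator pod restarted as described above: USD oc get secret obfuscation-translation-table -n openshift-insights If the command returns the secret, obfuscation is enabled and working. If the secret is not found, obfuscation is either not enabled or the insights-operator pod has not yet restarted with the new configuration. 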
Additional resources For more information on how to download your Insights Operator archive, see Showing data collected by the Insights Operator . 4.8. Importing simple content access entitlements with Insights Operator Insights Operator periodically imports your simple content access entitlements from OpenShift Cluster Manager and stores them in the etc-pki-entitlement secret in the openshift-config-managed namespace. Simple content access is a capability in Red Hat subscription tools which simplifies the behavior of the entitlement tooling. This feature makes it easier to consume the content provided by your Red Hat subscriptions without the complexity of configuring subscription tooling. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. The Insights Operator imports simple content access entitlements every eight hours, but can be configured or disabled using the insights-config ConfigMap object in the openshift-insights namespace. Note Simple content access must be enabled in Red Hat Subscription Management for the importing to function. Additional resources See About simple content access in the Red Hat Subscription Central documentation, for more information about simple content access. See Using Red Hat subscriptions in builds for more information about using simple content access entitlements in OpenShift Container Platform builds. 4.8.1. Configuring simple content access import interval You can configure how often the Insights Operator imports the simple content access (sca) entitlements by using the insights-config ConfigMap object in the openshift-insights namespace. The entitlement import normally occurs every eight hours, but you can shorten this sca interval if you update your simple content access configuration in the insights-config ConfigMap object. This procedure describes how to update the import interval to two hours (2h). You can specify hours (h) or hours and minutes, for example: 2h30m. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. Set the sca attribute in the file to interval: 2h to import content every two hours. apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: interval: 2h # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to interval: 2h . 4.8.2. Disabling simple content access import You can disable the importing of simple content access entitlements by using the insights-config ConfigMap object in the openshift-insights namespace. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . 
Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the sca attribute to disabled: true . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: disabled: true # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to disabled: true . 4.8.3. Enabling a previously disabled simple content access import If the importing of simple content access entitlements is disabled, the Insights Operator does not import simple content access entitlements. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the sca attribute to disabled: false . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: disabled: false # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to disabled: false . | [
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: false",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"oc apply -f <your_datagather_definition>.yaml",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: interval: 2h",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: false"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/support/remote-health-monitoring-with-connected-clusters |
Automation controller user guide | Automation controller user guide Red Hat Ansible Automation Platform 2.4 User Guide for Automation Controller Red Hat Customer Content Services | [
"- name: Set the license using a file license: manifest: \"/tmp/my_manifest.zip\"",
"subscription-manager list --available --all | grep \"Ansible Automation Platform\" -B 3 -A 6",
"Subscription Name: Red Hat Ansible Automation Platform, Premium (5000 Managed Nodes) Provides: Red Hat Ansible Engine Red Hat Single Sign-On Red Hat Ansible Automation Platform SKU: MCT3695 Contract: ******** Pool ID: ******************** Provides Management: No Available: 4999 Suggested: 1",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager list --consumed",
"#subscription-manager remove --pool=<pool_id>",
"managed > manifest_limit => non-compliant managed =< manifest_limit => compliant",
"awx-manage host_metric --csv",
"awx-manage host_metric --tarball",
"awx-manage host_metric --tarball --rows_per_file <n>",
"api/v2/host_metric <n> DELETE",
"\"related_search_fields\": [ \"modified_by__search\", \"project__search\", \"project_update__search\", \"credentials__search\", \"unified_job_template__search\", \"created_by__search\", \"inventory__search\", \"labels__search\", \"schedule__search\", \"webhook_credential__search\", \"job_template__search\", \"job_events__search\", \"dependent_jobs__search\", \"launch_config__search\", \"unifiedjob_ptr__search\", \"notifications__search\", \"unified_job_node__search\", \"instance_group__search\", \"hosts__search\", \"job_host_summaries__search\"",
"AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SECURITY_TOKEN",
"vars: aws: access_key: '{{ lookup(\"env\", \"AWS_ACCESS_KEY_ID\") }}' secret_key: '{{ lookup(\"env\", \"AWS_SECRET_ACCESS_KEY\") }}' security_token: '{{ lookup(\"env\", \"AWS_SECURITY_TOKEN\") }}'",
"GCE_EMAIL GCE_PROJECT GCE_CREDENTIALS_FILE_PATH",
"vars: gce: email: '{{ lookup(\"env\", \"GCE_EMAIL\") }}' project: '{{ lookup(\"env\", \"GCE_PROJECT\") }}' pem_file_path: '{{ lookup(\"env\", \"GCE_PEM_FILE_PATH\") }}'",
"ManagedCredentialType( namespace='insights', . . . injectors={ 'extra_vars': { \"scm_username\": \"{{username}}\", \"scm_password\": \"{{password}}\", }, 'env': { 'INSIGHTS_USER': '{{username}}', 'INSIGHTS_PASSWORD': '{{password}}', },",
"vars: machine: username: '{{ ansible_user }}' password: '{{ ansible_password }}'",
"AZURE_CLIENT_ID AZURE_SECRET AZURE_SUBSCRIPTION_ID AZURE_TENANT AZURE_CLOUD_ENVIRONMENT",
"AZURE_AD_USER AZURE_PASSWORD AZURE_SUBSCRIPTION_ID",
"client_id secret subscription_id tenant azure_cloud_environment",
"ad_user password subscription_id",
"vars: azure: client_id: '{{ lookup(\"env\", \"AZURE_CLIENT_ID\") }}' secret: '{{ lookup(\"env\", \"AZURE_SECRET\") }}' tenant: '{{ lookup(\"env\", \"AZURE_TENANT\") }}' subscription_id: '{{ lookup(\"env\", \"AZURE_SUBSCRIPTION_ID\") }}'",
"ANSIBLE_NET_USERNAME ANSIBLE_NET_PASSWORD",
"vars: network: username: '{{ lookup(\"env\", \"ANSIBLE_NET_USERNAME\") }}' password: '{{ lookup(\"env\", \"ANSIBLE_NET_PASSWORD\") }}'",
"--- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] resources: [\"pods/log\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods/attach\"] verbs: [\"get\", \"list\", \"watch\", \"create\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io",
"apply -f containergroup-sa.yml",
"export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '\"')",
"get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token",
"get secret USDSA_SECRET -o json | jq '.data[\"ca.crt\"]' | xargs | base64 --decode > containergroup-ca.crt",
"ManagedCredentialType( namespace='controller', . . . injectors={ 'env': { 'TOWER_HOST': '{{host}}', 'TOWER_USERNAME': '{{username}}', 'TOWER_PASSWORD': '{{password}}', 'TOWER_VERIFY_SSL': '{{verify_ssl}}', 'TOWER_OAUTH_TOKEN': '{{oauth_token}}', 'CONTROLLER_HOST': '{{host}}', 'CONTROLLER_USERNAME': '{{username}}', 'CONTROLLER_PASSWORD': '{{password}}', 'CONTROLLER_VERIFY_SSL': '{{verify_ssl}}', 'CONTROLLER_OAUTH_TOKEN': '{{oauth_token}}', }",
"vars: controller: host: '{{ lookup(\"env\", \"CONTROLLER_HOST\") }}' username: '{{ lookup(\"env\", \"CONTROLLER_USERNAME\") }}' password: '{{ lookup(\"env\", \"CONTROLLER_PASSWORD\") }}'",
"FOREMAN_INI_PATH",
"OVIRT_URL OVIRT_USERNAME OVIRT_PASSWORD",
"vars: ovirt: ovirt_url: '{{ lookup(\"env\", \"OVIRT_URL\") }}' ovirt_username: '{{ lookup(\"env\", \"OVIRT_USERNAME\") }}' ovirt_password: '{{ lookup(\"env\", \"OVIRT_PASSWORD\") }}'",
"ManagedCredentialType( namespace='rhv', . . . injectors={ # The duplication here is intentional; the ovirt4 inventory plugin # writes a .ini file for authentication, while the ansible modules for # ovirt4 use a separate authentication process that support # environment variables; by injecting both, we support both 'file': { 'template': '\\n'.join( [ '[ovirt]', 'ovirt_url={{host}}', 'ovirt_username={{username}}', 'ovirt_password={{password}}', '{% if ca_file %}ovirt_ca_file={{ca_file}}{% endif %}', ] ) }, 'env': {'OVIRT_INI_PATH': '{{tower.filename}}', 'OVIRT_URL': '{{host}}', 'OVIRT_USERNAME': '{{username}}', 'OVIRT_PASSWORD': '{{password}}'}, }, )",
"VMWARE_HOST VMWARE_USER VMWARE_PASSWORD VMWARE_VALIDATE_CERTS",
"vars: vmware: host: '{{ lookup(\"env\", \"VMWARE_HOST\") }}' username: '{{ lookup(\"env\", \"VMWARE_USER\") }}' password: '{{ lookup(\"env\", \"VMWARE_PASSWORD\") }}'",
"- hosts: all vars: machine: username: '{{ ansible_user }}' password: '{{ ansible_password }}' controller: host: '{{ lookup(\"env\", \"CONTROLLER_HOST\") }}' username: '{{ lookup(\"env\", \"CONTROLLER_USERNAME\") }}' password: '{{ lookup(\"env\", \"CONTROLLER_PASSWORD\") }}' network: username: '{{ lookup(\"env\", \"ANSIBLE_NET_USERNAME\") }}' password: '{{ lookup(\"env\", \"ANSIBLE_NET_PASSWORD\") }}' aws: access_key: '{{ lookup(\"env\", \"AWS_ACCESS_KEY_ID\") }}' secret_key: '{{ lookup(\"env\", \"AWS_SECRET_ACCESS_KEY\") }}' security_token: '{{ lookup(\"env\", \"AWS_SECURITY_TOKEN\") }}' vmware: host: '{{ lookup(\"env\", \"VMWARE_HOST\") }}' username: '{{ lookup(\"env\", \"VMWARE_USER\") }}' password: '{{ lookup(\"env\", \"VMWARE_PASSWORD\") }}' gce: email: '{{ lookup(\"env\", \"GCE_EMAIL\") }}' project: '{{ lookup(\"env\", \"GCE_PROJECT\") }}' azure: client_id: '{{ lookup(\"env\", \"AZURE_CLIENT_ID\") }}' secret: '{{ lookup(\"env\", \"AZURE_SECRET\") }}' tenant: '{{ lookup(\"env\", \"AZURE_TENANT\") }}' subscription_id: '{{ lookup(\"env\", \"AZURE_SUBSCRIPTION_ID\") }}' tasks: - debug: var: machine - debug: var: controller - debug: var: network - debug: var: aws - debug: var: vmware - debug: var: gce - shell: 'cat {{ gce.pem_file_path }}' delegate_to: localhost - debug: var: azure",
"- command: somecommand environment: USERNAME: '{{ lookup(\"env\", \"USERNAME\") }}' PASSWORD: '{{ lookup(\"env\", \"PASSWORD\") }}' delegate_to: somehost",
"/api/v2/organizations/N/galaxy_credentials/",
"curl \"https://controller.example.org/api/v2/credentials/?credential_type__namespace=aws\"",
"fields: - type: string id: username label: Username - type: string id: password label: Password secret: true required: - username - password",
"{ \"fields\": [ { \"type\": \"string\", \"id\": \"username\", \"label\": \"Username\" }, { \"secret\": true, \"type\": \"string\", \"id\": \"password\", \"label\": \"Password\" } ], \"required\": [\"username\", \"password\"] }",
"{ \"fields\": [{ \"id\": \"api_token\", # required - a unique name used to reference the field value \"label\": \"API Token\", # required - a unique label for the field \"help_text\": \"User-facing short text describing the field.\", \"type\": (\"string\" | \"boolean\") # defaults to 'string' \"choices\": [\"A\", \"B\", \"C\"] # (only applicable to `type=string`) \"format\": \"ssh_private_key\" # optional, can be used to enforce data format validity for SSH private key data (only applicable to `type=string`) \"secret\": true, # if true, the field value will be encrypted \"multiline\": false # if true, the field should be rendered as multi-line for input entry # (only applicable to `type=string`) },{ # field 2 },{ # field 3 }], \"required\": [\"api_token\"] # optional; one or more fields can be marked as required },",
"{ \"fields\": [{ \"id\": \"api_token\", # required - a unique name used to reference the field value \"label\": \"API Token\", # required - a unique label for the field \"type\": \"string\", \"choices\": [\"A\", \"B\", \"C\"] }] },",
"{ \"file\": { \"template\": \"[mycloud]\\ntoken={{ api_token }}\" }, \"env\": { \"THIRD_PARTY_CLOUD_API_TOKEN\": \"{{ api_token }}\" }, \"extra_vars\": { \"some_extra_var\": \"{{ username }}:{{ password }}\" } }",
"{ \"file\": { \"template\": \"[mycloud]\\ntoken={{ api_token }}\" }, \"env\": { \"MY_CLOUD_INI_FILE\": \"{{ tower.filename }}\" } }",
"[mycloud]\\ntoken=SOME_TOKEN_VALUE",
"{ \"fields\": [{ \"id\": \"cert\", \"label\": \"Certificate\", \"type\": \"string\" },{ \"id\": \"key\", \"label\": \"Key\", \"type\": \"string\" }] }",
"{ \"file\": { \"template.cert_file\": \"[mycert]\\n{{ cert }}\", \"template.key_file\": \"[mykey]\\n{{ key }}\" }, \"env\": { \"MY_CERT_INI_FILE\": \"{{ tower.filename.cert_file }}\", \"MY_KEY_INI_FILE\": \"{{ tower.filename.key_file }}\" } }",
"--- version: 3 dependencies: galaxy: requirements.yml",
"--- collections: - name: awx.awx",
"ansible-builder build STEP 7: COMMIT my-awx-ee --> 09c930f5f6a 09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2 Complete! The build context can be found at: context",
"run -v /ssh_config:/etc/ssh/ssh_config.d/:O",
"[ \"/var/lib/awx/.ssh:/root/.ssh:O\" ]",
"sudo su",
"mkdir /foo",
"chmod 777 /foo",
"semanage fcontext -a -t container_file_t \"/foo(/.*)?\"",
"restorecon -vvFR /foo",
"version: 3 build_arg_defaults: ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre' dependencies: galaxy: requirements.yml python: - six - psutil system: bindep.txt images: base_image: name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest additional_build_files: - src: files/ansible.cfg dest: configs additional_build_steps: prepend_galaxy: - ADD _build/configs/ansible.cfg /home/runner/.ansible.cfg prepend_final: | RUN whoami RUN cat /etc/os-release append_final: - RUN echo This is a post-install command! - RUN ls -la /etc",
"ansible_core: package_pip: ansible-core ansible_core: package_pip: ansible-core==2.14.3 ansible_core: package_pip: https://github.com/example_user/ansible/archive/refs/heads/ansible.tar.gz",
"ansible_runner: package_pip: ansible-runner ansible_runner: package_pip: ansible-runner==2.3.2 ansible_runner: package_pip: https://github.com/example_user/ansible-runner/archive/refs/heads/ansible-runner.tar.gz",
"dependencies: python: requirements.txt system: bindep.txt galaxy: requirements.yml ansible_core: package_pip: ansible-core==2.14.2 ansible_runner: package_pip: ansible-runner==2.3.1 python_interpreter: package_system: \"python310\" python_path: \"/usr/bin/python3.10\"",
"dependencies: python: - pywinrm system: - iputils [platform:rpm] galaxy: collections: - name: community.windows - name: ansible.utils version: 2.10.1 ansible_core: package_pip: ansible-core==2.14.2 ansible_runner: package_pip: ansible-runner==2.3.1 python_interpreter: package_system: \"python310\" python_path: \"/usr/bin/python3.10\"",
"ansible-builder introspect --sanitize ~/.ansible/collections/",
"options: container_init: package_pip: dumb-init>=1.2.5 entrypoint: '[\"dumb-init\"]' cmd: '[\"csh\"]' package_manager_path: /usr/bin/microdnf relax_password_permissions: false skip_ansible_check: true workdir: /myworkdir user: bob",
"--- container_image: image-name process_isolation_executable: podman # or docker process_isolation: true",
"ansible-galaxy role install -r roles/requirements.yml -p <project-specific cache location>/requirements_roles -vvv",
"AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths']",
"ansible-sign project gpg-sign /path/to/project",
"ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms for RHEL 8 ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms for RHEL 9",
"gpg --list-secret-keys",
"gpg --list-keys gpg --export --armour <key fingerprint> > my_public_key.asc",
"dnf install ansible-sign",
"ansible-sign --version",
"ansible-sign 0.1",
"cd sample-project/ tree -a . . ├── inventory └── playbooks └── get_uptime.yml └── hello.yml 1 directory, 3 files",
"include inventory recursive-include playbooks *.yml",
"ansible-sign project gpg-sign .",
"[OK ] GPG signing successful! [NOTE ] Checksum manifest: ./.ansible-sign/sha256sum.txt [NOTE ] GPG summary: signature created",
"tree -a . . ├── .ansible-sign │ ├── sha256sum.txt │ └── sha256sum.txt.sig ├── inventory ├── MANIFEST.in └── playbooks ├── get_uptime.yml └── hello.yml",
"ansible-sign project gpg-verify . [OK ] GPG signature verification succeeded. [OK ] Checksum validation succeeded.",
"/api/v2/hosts?host_filter=ansible_facts__ansible_processor_vcpus=8",
"/api/v2/hosts/?host_filter=name=localhost /api/v2/hosts/?host_filter=ansible_facts__ansible_date_time__weekday_number=\"3\" /api/v2/hosts/?host_filter=ansible_facts__ansible_processor[]=\"GenuineIntel\" /api/v2/hosts/?host_filter=ansible_facts__ansible_lo__ipv6[]__scope=\"host\" /api/v2/hosts/?host_filter=ansible_facts__ansible_processor_vcpus=8 /api/v2/hosts/?host_filter=ansible_facts__ansible_env__PYTHONUNBUFFERED=\"true\" /api/v2/hosts/?host_filter=(name=localhost or name=database) and (groups__name=east or groups__name=\"west coast\") and ansible_facts__an",
"groups.name:groupA",
"ansible_facts.ansible_fips:false",
"host_filter=name=my_host",
"host_filter=ansible_facts__packages__dnsmasq[]__version=\"2.66\"",
"/api/v2/hosts/?host_filter=ansible_facts__ansible_processor[]=\"GenuineIntel\"",
"[account_1234] host1 host2 state=shutdown [account_4321] host3 host4 state=shutdown [account_1234:vars] account_alias=product_dev [account_4321:vars] account_alias=sustaining",
"plugin: constructed strict: true groups: is_shutdown: state | default(\"running\") == \"shutdown\" product_dev: account_alias == \"product_dev\"",
"plugin: constructed strict: true groups: shutdown_in_product_dev: state | default(\"running\") == \"shutdown\" and account_alias == \"product_dev\"",
"source_vars: plugin: constructed strict: true groups: shutdown_in_product_dev: state | default(\"running\") == \"shutdown\" and account_alias == \"product_dev\" compose: resolved_state: state | default(\"running\") is_in_product_dev: account_alias == \"product_dev\" limit: ``",
"all: children: groupA: vars: filter_var: filter_val children: groupB: hosts: host1: {} ungrouped: hosts: host2: {}",
"`source_vars`: plugin: constructed `limit`: `groupA`",
"source_vars: plugin: constructed strict: true groups: filter_var_is_filter_val: filter_var | default(\"\") == \"filter_val\" limit: filter_var_is_filter_val",
"source_vars: plugin: constructed strict: true groups: hosts_using_xterm: ansible_env.TERM == \"xterm\" limit: hosts_using_xterm",
"source_vars: plugin: constructed strict: true groups: intel_hosts: \"GenuineIntel\" in ansible_processor limit: intel_hosts",
"{ ansible_user : <username to ssh into> ansible_ssh_pass : <password for the username> ansible_become_pass: <password for becoming the root> }",
"{ \"status\": { \"power_state\": \"powered_on\", \"created\": \"2020-08-04T18:13:04+00:00\", \"healthy\": true }, \"name\": \"foobar\", \"ip_address\": \"192.168.2.1\" }",
"awx-manage export_custom_scripts --filename=my_scripts.tar Dump of old custom inventory scripts at my_scripts.tar",
"mkdir my_scripts tar -xf my_scripts.tar -C my_scripts",
"ls my_scripts 10 inventory_script_rawhook _19 _30 inventory_script_listenhospital _11 inventory_script_upperorder _1 inventory_script_commercialinternet45 _4 inventory_script_whitestring _12 inventory_script_eastplant _22 inventory_script_pinexchange _5 inventory_script_literaturepossession _13 inventory_script_governmentculture _23 inventory_script_brainluck _6 inventory_script_opportunitytelephone _14 inventory_script_bottomguess _25 inventory_script_buyerleague _7 inventory_script_letjury _15 inventory_script_wallisland _26 inventory_script_lifesport _8 random_inventory_script 16 inventory_script_wallisland _27 inventory_script_exchangesomewhere _9 random_inventory_script _17 inventory_script_bidstory _28 inventory_script_boxchild _18 p _29__inventory_script_wearstress",
"./my_scripts/ 11__inventory_script_upperorder {\"group \\ud801\\udcb0\\uc20e\\u7b0e\\ud81c\\udfeb\\ub12b\\ub4d0\\u9ac6\\ud81e\\udf07\\u6ff9\\uc17b\": {\"hosts\": [\"host_\\ud821\\udcad\\u68b6\\u7a51\\u93b4\\u69cf\\uc3c2\\ud81f\\uddbe\\ud820\\udc92\\u3143\\u62c7\", \"host_\\u6057\\u3985\\u1f60\\ufefb\\u1b22\\ubd2d\\ua90c\\ud81a\\udc69\\u1344\\u9d15\", \"host_\\u78a0\\ud820\\udef3\\u925e\\u69da\\ua549\\ud80c\\ude7e\\ud81e\\udc91\\ud808\\uddd1\\u57d6\\ud801\\ude57\", \"host_\\ud83a\\udc2d\\ud7f7\\ua18a\\u779a\\ud800\\udf8b\\u7903\\ud820\\udead\\u4154\\ud808\\ude15\\u9711\", \"host_\\u18a1\\u9d6f\\u08ac\\u74c2\\u54e2\\u740e\\u5f02\\ud81d\\uddee\\ufbd6\\u4506\"], \"vars\": {\"ansible_host\": \"127.0.0.1\", \"ansible_connection\": \"local\"}}}",
"ansible-inventory -i ./my_scripts/_11__inventory_script_upperorder --list --export",
"compose: ansible_host: public_ip_address ec2_account_id: owner_id ec2_ami_launch_index: ami_launch_index | string ec2_architecture: architecture ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list)) ec2_client_token: client_token ec2_dns_name: public_dns_name ec2_ebs_optimized: ebs_optimized ec2_eventsSet: events | default(\"\") ec2_group_name: placement.group_name ec2_hypervisor: hypervisor ec2_id: instance_id ec2_image_id: image_id ec2_instance_profile: iam_instance_profile | default(\"\") ec2_instance_type: instance_type ec2_ip_address: public_ip_address ec2_kernel: kernel_id | default(\"\") ec2_key_name: key_name ec2_launch_time: launch_time | regex_replace(\" \", \"T\") | regex_replace(\"(\\+)(\\d\\d):(\\d)(\\d)USD\", \".\\g<2>\\g<3>Z\") ec2_monitored: monitoring.state in ['enabled', 'pending'] ec2_monitoring_state: monitoring.state ec2_persistent: persistent | default(false) ec2_placement: placement.availability_zone ec2_platform: platform | default(\"\") ec2_private_dns_name: private_dns_name ec2_private_ip_address: private_ip_address ec2_public_dns_name: public_dns_name ec2_ramdisk: ramdisk_id | default(\"\") ec2_reason: state_transition_reason ec2_region: placement.region ec2_requester_id: requester_id | default(\"\") ec2_root_device_name: root_device_name ec2_root_device_type: root_device_type ec2_security_group_ids: security_groups | map(attribute='group_id') | list | join(',') ec2_security_group_names: security_groups | map(attribute='group_name') | list | join(',') ec2_sourceDestCheck: source_dest_check | default(false) | lower | string ec2_spot_instance_request_id: spot_instance_request_id | default(\"\") ec2_state: state.name ec2_state_code: state.code ec2_state_reason: state_reason.message if state_reason is defined else \"\" ec2_subnet_id: subnet_id | default(\"\") ec2_tag_Name: tags.Name ec2_virtualization_type: virtualization_type ec2_vpc_id: vpc_id | default(\"\") filters: instance-state-name: - running groups: ec2: true hostnames: - network-interface.addresses.association.public-ip - dns-name - private-dns-name keyed_groups: - key: image_id | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: images prefix: '' separator: '' - key: placement.availability_zone parent_group: zones prefix: '' separator: '' - key: ec2_account_id | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: accounts prefix: '' separator: '' - key: ec2_state | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: instance_states prefix: instance_state - key: platform | default(\"undefined\") | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: platforms prefix: platform - key: instance_type | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: types prefix: type - key: key_name | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: keys prefix: key - key: placement.region parent_group: regions prefix: '' separator: '' - key: security_groups | map(attribute=\"group_name\") | map(\"regex_replace\", \"[^A-Za-z0-9\\_]\", \"_\") | list parent_group: security_groups prefix: security_group - key: dict(tags.keys() | map(\"regex_replace\", \"[^A-Za-z0-9\\_]\", \"_\") | list | zip(tags.values() | map(\"regex_replace\", \"[^A-Za-z0-9\\_]\", \"_\") | list)) parent_group: tags prefix: tag - key: tags.keys() | map(\"regex_replace\", \"[^A-Za-z0-9\\_]\", \"_\") | list parent_group: tags prefix: tag - key: vpc_id | regex_replace(\"[^A-Za-z0-9\\_]\", \"_\") parent_group: vpcs prefix: vpc_id - 
key: placement.availability_zone parent_group: '{{ placement.region }}' prefix: '' separator: '' plugin: amazon.aws.aws_ec2 use_contrib_script_compatible_sanitization: true",
"auth_kind: serviceaccount compose: ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP) gce_description: description if description else None gce_id: id gce_image: image gce_machine_type: machineType gce_metadata: metadata.get(\"items\", []) | items2dict(key_name=\"key\", value_name=\"value\") gce_name: name gce_network: networkInterfaces[0].network.name gce_private_ip: networkInterfaces[0].networkIP gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None) gce_status: status gce_subnetwork: networkInterfaces[0].subnetwork.name gce_tags: tags.get(\"items\", []) gce_zone: zone hostnames: - name - public_ip - private_ip keyed_groups: - key: gce_subnetwork prefix: network - key: gce_private_ip prefix: '' separator: '' - key: gce_public_ip prefix: '' separator: '' - key: machineType prefix: '' separator: '' - key: zone prefix: '' separator: '' - key: gce_tags prefix: tag - key: status | lower prefix: status - key: image prefix: '' separator: '' plugin: google.cloud.gcp_compute retrieve_image_info: true use_contrib_script_compatible_sanitization: true",
"conditional_groups: azure: true default_host_filters: [] fail_on_template_errors: false hostvar_expressions: computer_name: name private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None provisioning_state: provisioning_state | title public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None public_ip_id: public_ip_id if public_ip_id is defined else None public_ip_name: public_ip_name if public_ip_name is defined else None tags: tags if tags else None type: resource_type keyed_groups: - key: location prefix: '' separator: '' - key: tags.keys() | list if tags else [] prefix: '' separator: '' - key: security_group prefix: '' separator: '' - key: resource_group prefix: '' separator: '' - key: os_disk.operating_system_type prefix: '' separator: '' - key: dict(tags.keys() | map(\"regex_replace\", \"^(.*)USD\", \"\\1_\") | list | zip(tags.values() | list)) if tags else [] prefix: '' separator: '' plain_host_names: true plugin: azure.azcollection.azure_rm use_contrib_script_compatible_sanitization: true",
"compose: ansible_host: guest.ipAddress ansible_ssh_host: guest.ipAddress ansible_uuid: 99999999 | random | to_uuid availablefield: availableField configissue: configIssue configstatus: configStatus customvalue: customValue effectiverole: effectiveRole guestheartbeatstatus: guestHeartbeatStatus layoutex: layoutEx overallstatus: overallStatus parentvapp: parentVApp recenttask: recentTask resourcepool: resourcePool rootsnapshot: rootSnapshot triggeredalarmstate: triggeredAlarmState filters: - runtime.powerState == \"poweredOn\" keyed_groups: - key: config.guestId prefix: '' separator: '' - key: '\"templates\" if config.template else \"guests\"' prefix: '' separator: '' plugin: community.vmware.vmware_vm_inventory properties: - availableField - configIssue - configStatus - customValue - datastore - effectiveRole - guestHeartbeatStatus - layout - layoutEx - name - network - overallStatus - parentVApp - permission - recentTask - resourcePool - rootSnapshot - snapshot - triggeredAlarmState - value - capability - config - guest - runtime - storage - summary strict: false with_nested_properties: true",
"group_prefix: foreman_ keyed_groups: - key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '') prefix: foreman_environment_ separator: '' - key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') prefix: foreman_location_ separator: '' - key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') prefix: foreman_organization_ separator: '' - key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') prefix: foreman_lifecycle_environment_ separator: '' - key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') prefix: foreman_content_view_ separator: '' legacy_hostvars: true plugin: theforeman.foreman.foreman validate_certs: false want_facts: true want_hostcollections: false want_params: true",
"expand_hostvars: true fail_on_errors: true inventory_hostname: uuid plugin: openstack.cloud.openstack",
"compose: ansible_host: (devices.values() | list)[0][0] if devices else None keyed_groups: - key: cluster prefix: cluster separator: _ - key: status prefix: status separator: _ - key: tags prefix: tag separator: _ ovirt_hostname_preference: - name - fqdn ovirt_insecure: false plugin: ovirt.ovirt.ovirt",
"include_metadata: true inventory_id: <inventory_id or url_quoted_named_url> plugin: awx.awx.tower validate_certs: <true or false>",
"- hosts: all vars: scan_use_checksum: false scan_use_recursive: false tasks: - scan_packages: - scan_services: - scan_files: paths: '{{ scan_file_paths }}' get_checksum: '{{ scan_use_checksum }}' recursive: '{{ scan_use_recursive }}' when: scan_file_paths is defined",
"Bootstrap Ubuntu (16.04) --- - name: Get Ubuntu 16, and on ready hosts: all sudo: yes gather_facts: no tasks: - name: install python-simplejson raw: sudo apt-get -y update raw: sudo apt-get -y install python-simplejson raw: sudo apt-get install python-apt Bootstrap Fedora (23, 24) --- - name: Get Fedora ready hosts: all sudo: yes gather_facts: no tasks: - name: install python-simplejson raw: sudo dnf -y update raw: sudo dnf -y install python-simplejson raw: sudo dnf -y install rpm-python",
"scan_foo.py: def main(): module = AnsibleModule( argument_spec = dict()) foo = [ { \"hello\": \"world\" }, { \"foo\": \"bar\" } ] results = dict(ansible_facts=dict(foo=foo)) module.exit_json(**results) main()",
"[ { \"hello\": \"world\" }, { \"foo\": \"bar\" } ]",
"- hosts: all gather_facts: false tasks: - name: Clear gathered facts from all currently targeted hosts meta: clear_facts",
"clouds: devstack: auth: auth_url: http://devstack.yoursite.com:5000/v2.0/ username: admin password: your_password_here project_name: demo",
"- hosts: all gather_facts: false vars: config_file: \"{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}\" nova_tenant_name: demo nova_image_name: \"cirros-0.3.2-x86_64-uec\" nova_instance_name: autobot nova_instance_state: 'present' nova_flavor_name: m1.nano nova_group: group_name: antarctica instance_name: deceptacon instance_count: 3 tasks: - debug: msg=\"{{ config_file }}\" - stat: path=\"{{ config_file }}\" register: st - include_vars: \"{{ config_file }}\" when: st.stat.exists and st.stat.isreg - name: \"Print out clouds variable\" debug: msg=\"{{ clouds|default('No clouds found') }}\" - name: \"Setting nova instance state to: {{ nova_instance_state }}\" local_action: module: nova_compute login_username: \"{{ clouds.devstack.auth.username }}\" login_password: \"{{ clouds.devstack.auth.password }}\"",
"- vsphere_guest: vcenter_hostname: \"{{ lookup('env', 'VMWARE_HOST') }}\" username: \"{{ lookup('env', 'VMWARE_USER') }}\" password: \"{{ lookup('env', 'VMWARE_PASSWORD') }}\" guest: newvm001 from_template: yes template_src: linuxTemplate cluster: MainCluster resource_pool: \"/Resources\" vm_extra_config: folder: MyFolder",
"curl -k -i -H 'Content-Type:application/json' -XPOST -d '{\"host_config_key\": \"redhat\"}' https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/",
"./request_tower_configuration.sh -h Usage: ./request_tower_configuration.sh <options> Request server configuration from Ansible Tower. OPTIONS: -h Show this message -s Controller server (e.g. https://ac.example.com) (required) -k Allow insecure SSL connections and transfers -c Host config key (required) -t Job template ID (required) -e Extra variables",
"'{\"extra_vars\": {\"variable1\":\"value1\",\"variable2\":\"value2\",...}}'",
"root@localhost:~USD curl -f -H 'Content-Type: application/json' -XPOST -d '{\"host_config_key\": \"redhat\", \"extra_vars\": \"{\\\"foo\\\": \\\"bar\\\"}\"}' https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback",
"launch_to_orbit: true satellites: - sputnik - explorer - satcom",
"{ \"launch_to_orbit\": true, \"satellites\": [\"sputnik\", \"explorer\", \"satcom\"] }",
"/api/v2/jobs/?job_slice_count__gt=1",
"/api/v2/workflow_jobs/?job_template__isnull=false",
"/api/v2/job_templates/?job_slice_count__gt=1",
"--- - hosts: localhost tasks: - name: \"Artifact integration test results to the web\" local_action: 'shell curl -F \"file=@integration_results.txt\" https://file.io' register: result - name: \"Artifact URL of test results to Workflows\" set_stats: data: integration_results_url: \"{{ (result.stdout|from_json).link }}\"",
"--- - hosts: localhost tasks: - name: \"Get test results from the web\" uri: url: \"{{ integration_results_url }}\" return_content: true register: results - name: \"Output test results\" debug: msg: \"{{ results.content }}\"",
"(mem - 2048) / mem_per_fork",
"(4096 - 2048) / 100 == ~20",
"cpus * fork_per_cpu",
"4 * 4 == 16",
"16 + (20 - 16) * 0.5 = 18",
"{\"Authentication\": \"988881adc9fc3655077dc2d4d757d480b5ea0e11\", \"MessageType\": \"Test\"}`.",
"job id name url created_by started finished status traceback inventory project playbook credential limit extra_vars hosts http method",
"{\"id\": 38, \"name\": \"Demo Job Template\", \"url\": \"https://host/#/jobs/playbook/38\", \"created_by\": \"bianca\", \"started\": \"2020-07-28T19:57:07.888193+00:00\", \"finished\": null, \"status\": \"running\", \"traceback\": \"\", \"inventory\": \"Demo Inventory\", \"project\": \"Demo Project\", \"playbook\": \"hello_world.yml\", \"credential\": \"Demo Credential\", \"limit\": \"\", \"extra_vars\": \"{}\", \"hosts\": {}}POST / HTTP/1.1",
"job id name url created_by started finished status traceback inventory project playbook credential limit extra_vars hosts",
"{\"id\": 46, \"name\": \"AWX-Collection-tests-awx_job_wait-long_running-XVFBGRSAvUUIrYKn\", \"url\": \"https://host/#/jobs/playbook/46\", \"created_by\": \"bianca\", \"started\": \"2020-07-28T20:43:36.966686+00:00\", \"finished\": \"2020-07-28T20:43:44.936072+00:00\", \"status\": \"failed\", \"traceback\": \"\", \"inventory\": \"Demo Inventory\", \"project\": \"AWX-Collection-tests-awx_job_wait-long_running-JJSlglnwtsRJyQmw\", \"playbook\": \"fail.yml\", \"credential\": null, \"limit\": \"\", \"extra_vars\": \"{\\\"sleep_interval\\\": 300}\", \"hosts\": {\"localhost\": {\"failed\": true, \"changed\": 0, \"dark\": 0, \"failures\": 1, \"ok\": 1, \"processed\": 1, \"skipped\": 0, \"rescued\": 0, \"ignored\": 0}}}",
"{{ job_friendly_name }} #{{ job.id }} had status {{ job.status }}, view details at {{ url }} {{ job_metadata }}",
"{\"id\": 18, \"name\": \"Project - Space Procedures\", \"url\": \"https://host/#/jobs/project/18\", \"created_by\": \"admin\", \"started\": \"2019-10-26T00:20:45.139356+00:00\", \"finished\": \"2019-10-26T00:20:55.769713+00:00\", \"status\": \"successful\", \"traceback\": \"\" }",
"{\"id\": 12, \"name\": \"JobTemplate - Launch Rockets\", \"url\": \"https://host/#/jobs/playbook/12\", \"created_by\": \"admin\", \"started\": \"2019-10-26T00:02:07.943774+00:00\", \"finished\": null, \"status\": \"running\", \"traceback\": \"\", \"inventory\": \"Inventory - Fleet\", \"project\": \"Project - Space Procedures\", \"playbook\": \"launch.yml\", \"credential\": \"Credential - Mission Control\", \"limit\": \"\", \"extra_vars\": \"{}\", \"hosts\": {} }",
"{\"id\": 14, \"name\": \"Workflow Job Template - Launch Mars Mission\", \"url\": \"https://host/#/workflows/14\", \"created_by\": \"admin\", \"started\": \"2019-10-26T00:11:04.554468+00:00\", \"finished\": \"2019-10-26T00:11:24.249899+00:00\", \"status\": \"successful\", \"traceback\": \"\", \"body\": \"Workflow job summary: node #1 spawns job #15, \\\"Assemble Fleet JT\\\", which finished with status successful. node #2 spawns job #16, \\\"Mission Start approval node\\\", which finished with status successful.\\n node #3 spawns job #17, \\\"Deploy Fleet\\\", which finished with status successful.\" }",
"/api/v2/organizations/N/notification_templates_started/ /api/v2/organizations/N/notification_templates_success/ /api/v2/organizations/N/notification_templates_error/",
"{{ job_friendly_name }} {{ job.id }} ran on {{ job.execution_node }} in {{ job.elapsed }} seconds.",
"{'url': 'https://towerhost/USD/jobs/playbook/13', 'traceback': '', 'status': 'running', 'started': '2019-08-07T21:46:38.362630+00:00', 'project': 'Stub project', 'playbook': 'ping.yml', 'name': 'Stub Job Template', 'limit': '', 'inventory': 'Stub Inventory', 'id': 42, 'hosts': {}, 'friendly_name': 'Job', 'finished': False, 'credential': 'Stub credential', 'created_by': 'admin'}",
"AWX_ISOLATION_BASE_PATH = \"/opt/tmp\"",
"AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths']"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/automation_controller_user_guide/index |
2.2. Installing NetworkManager | 2.2. Installing NetworkManager NetworkManager is installed by default on Red Hat Enterprise Linux. If it is not, enter the following command as root: For information on user privileges and gaining privileges, see the Red Hat Enterprise Linux System Administrator's Guide. | [
"~]# yum install NetworkManager"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-installing_networkmanager |
Chapter 9. Supplementary Server Variant | Chapter 9. Supplementary Server Variant The following table lists all the packages in the Supplementary Server variant. For more information about support scope, see the Scope of Coverage Details document. Package Core Package? License acroread No Commercial acroread-plugin No Commercial chromium-browser No BSD and LGPLv2+ flash-plugin No Commercial java-1.5.0-ibm No IBM Binary Code License java-1.5.0-ibm-demo No IBM Binary Code License java-1.5.0-ibm-devel No IBM Binary Code License java-1.5.0-ibm-javacomm No IBM Binary Code License java-1.5.0-ibm-jdbc No IBM Binary Code License java-1.5.0-ibm-plugin No IBM Binary Code License java-1.5.0-ibm-src No IBM Binary Code License java-1.6.0-ibm No IBM Binary Code License java-1.6.0-ibm-demo No IBM Binary Code License java-1.6.0-ibm-devel No IBM Binary Code License java-1.6.0-ibm-javacomm No IBM Binary Code License java-1.6.0-ibm-jdbc No IBM Binary Code License java-1.6.0-ibm-plugin No IBM Binary Code License java-1.6.0-ibm-src No IBM Binary Code License java-1.7.1-ibm No IBM Binary Code License java-1.7.1-ibm-demo No IBM Binary Code License java-1.7.1-ibm-devel No IBM Binary Code License java-1.7.1-ibm-jdbc No IBM Binary Code License java-1.7.1-ibm-plugin No IBM Binary Code License java-1.7.1-ibm-src No IBM Binary Code License java-1.8.0-ibm No IBM Binary Code License java-1.8.0-ibm-demo No IBM Binary Code License java-1.8.0-ibm-devel No IBM Binary Code License java-1.8.0-ibm-jdbc No IBM Binary Code License java-1.8.0-ibm-plugin No IBM Binary Code License java-1.8.0-ibm-src No IBM Binary Code License kmod-kspiceusb-rhel60 No GPLv2 libdfp No LGPLv2.1 libdfp-devel No LGPLv2.1 spice-usb-share No Redistributable, no modification permitted system-switch-java No GPLv2+ virtio-win No Red Hat Proprietary and GPLv2 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-supplementary-server-variant |
Chapter 3. Post-installation machine configuration tasks | Chapter 3. Post-installation machine configuration tasks There are times when you need to make changes to the operating systems running on OpenShift Container Platform nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way. Aside from a few specialized features, most changes to operating systems on OpenShift Container Platform nodes can be done by creating what are referred to as MachineConfig objects that are managed by the Machine Config Operator. Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OpenShift Container Platform nodes. 3.1. Understanding the Machine Config Operator 3.1.1. Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources About the OpenShift SDN network plugin . Project openshift-machine-config-operator 3.1.2. Machine config overview The Machine Config Operator (MCO) manages updates to systemd, CRI-O and Kubelet, the kernel, Network Manager and other system features. It also offers a MachineConfig CRD that can write configuration files onto the host (see machine-config-operator ). Understanding what MCO does and how it interacts with other components is critical to making advanced, system-level changes to an OpenShift Container Platform cluster. Here are some things you should know about MCO, machine configs, and how they are used: A machine config can make a specific change to a file or service on the operating system of each system representing a pool of OpenShift Container Platform nodes. MCO applies changes to operating systems in pools of machines. All OpenShift Container Platform clusters start with worker and control plane node pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. 
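For illustration only, a custom pool is typically declared with a MachineConfigPool object that ties a node label to the machine configs rendered for that role; the sketch below is not part of this chapter, and the pool name worker-hp and its node label are hypothetical:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-hp # hypothetical custom pool name
spec:
  machineConfigSelector:
    matchExpressions:
      # pick up machine configs for the default worker role plus the custom role
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, worker-hp]
  nodeSelector:
    matchLabels:
      # nodes join the pool by carrying this label
      node-role.kubernetes.io/worker-hp: ""
Nodes labeled node-role.kubernetes.io/worker-hp would then receive the worker machine configs plus any machine configs labeled machineconfiguration.openshift.io/role: worker-hp.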
However, examples in this section focus on changes to the default pool types. Important A node can have multiple labels applied that indicate its type, such as master or worker , however it can be a member of only a single machine config pool. Some machine configuration must be in place before OpenShift Container Platform is installed to disk. In most cases, this can be accomplished by creating a machine config that is injected directly into the OpenShift Container Platform installer process, instead of running as a post-installation machine config. In other cases, you might need to do bare metal installation where you pass kernel arguments at OpenShift Container Platform installer startup, to do such things as setting per-node individual IP addresses or advanced disk partitioning. MCO manages items that are set in machine configs. Manual changes you do to your systems will not be overwritten by MCO, unless MCO is explicitly told to manage a conflicting file. In other words, MCO only makes specific updates you request, it does not claim control over the whole node. Manual changes to nodes are strongly discouraged. If you need to decommission a node and start a new one, those direct changes would be lost. MCO is only supported for writing to files in /etc and /var directories, although there are symbolic links to some directories that can be writeable by being symbolically linked to one of those areas. The /opt and /usr/local directories are examples. Ignition is the configuration format used in MachineConfigs. See the Ignition Configuration Specification v3.2.0 for details. Although Ignition config settings can be delivered directly at OpenShift Container Platform installation time, and are formatted in the same way that MCO delivers Ignition configs, MCO has no way of seeing what those original Ignition configs are. Therefore, you should wrap Ignition config settings into a machine config before deploying them. When a file managed by MCO changes outside of MCO, the Machine Config Daemon (MCD) sets the node as degraded . It will not overwrite the offending file, however, and should continue to operate in a degraded state. A key reason for using a machine config is that it will be applied when you spin up new nodes for a pool in your OpenShift Container Platform cluster. The machine-api-operator provisions a new machine and MCO configures it. MCO uses Ignition as the configuration format. OpenShift Container Platform 4.6 moved from Ignition config specification version 2 to version 3. 3.1.2.1. What can you change with machine configs? The kinds of components that MCO can change include: config : Create Ignition config objects (see the Ignition configuration specification ) to do things like modify files, systemd services, and other features on OpenShift Container Platform machines, including: Configuration files : Create or overwrite files in the /var or /etc directory. systemd units : Create and set the status of a systemd service or add to an existing systemd service by dropping in additional settings. users and groups : Change SSH keys in the passwd section post-installation. Important Changing SSH keys via machine configs is only supported for the core user. kernelArguments : Add arguments to the kernel command line when OpenShift Container Platform nodes boot. kernelType : Optionally identify a non-standard kernel to use instead of the standard kernel. Use realtime to use the RT kernel (for RAN). This is only supported on select platforms. fips : Enable FIPS mode. 
FIPS should be set at installation time, not as a post-installation procedure. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. extensions : Extend RHCOS features by adding selected pre-packaged software. For this feature, available extensions include usbguard and kernel modules. Custom resources (for ContainerRuntime and Kubelet ) : Outside of machine configs, MCO manages two special custom resources for modifying CRI-O container runtime settings ( ContainerRuntime CR) and the Kubelet service ( Kubelet CR). The MCO is not the only Operator that can change operating system components on OpenShift Container Platform nodes. Other Operators can modify operating system-level features as well. One example is the Node Tuning Operator, which allows you to do node-level tuning through Tuned daemon profiles. Tasks for the MCO configuration that can be done post-installation are included in the following procedures. See descriptions of RHCOS bare metal installation for system configuration tasks that must be done during or before OpenShift Container Platform installation. 3.1.2.2. Project See the openshift-machine-config-operator GitHub site for details. 3.1.3. Checking machine config pool status To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc commands: Procedure To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m where: UPDATED The True status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in the CONFIG field in the oc get mcp output. The False status indicates a node in the MCP is updating. UPDATING The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated. DEGRADED A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready. MACHINECOUNT Indicates the total number of machines in that MCP. READYMACHINECOUNT Indicates the total number of machines in that MCP that are ready for scheduling. UPDATEDMACHINECOUNT Indicates the total number of machines in that MCP that have the current machine config. DEGRADEDMACHINECOUNT Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable. In the output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. 
Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the UPDATEDMACHINECOUNT being 2 . There are no issues, as indicated by the DEGRADEDMACHINECOUNT being 0 and DEGRADED being False . While the nodes in the MCP are updating, the machine config listed under CONFIG is the current machine config, which the MCP is being updated from. When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to. Note If a node is being cordoned, that node is not included in the READYMACHINECOUNT , but is included in the MACHINECOUNT . Also, the MCP status is set to UPDATING . Because the node has the current machine config, it is counted in the UPDATEDMACHINECOUNT total: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m To check the status of the nodes in an MCP by examining the MachineConfigPool custom resource, run the following command: : USD oc describe mcp worker Example output ... Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none> Note If a node is being cordoned, the node is not included in the Ready Machine Count . It is included in the Unavailable Machine Count : Example output ... Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3 To see each existing MachineConfig object, run the following command: USD oc get machineconfigs Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m ... rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m Note that the MachineConfig objects listed as rendered are not meant to be changed or deleted. To view the contents of a particular machine config (in this case, 01-master-kubelet ), run the following command: USD oc describe machineconfigs 01-master-kubelet The output from the command shows that this MachineConfig object contains both configuration files ( cloud.conf and kubelet.conf ) and a systemd service (Kubernetes Kubelet): Example output Name: 01-master-kubelet ... Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous... Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube \ kubelet \ --config=/etc/kubernetes/kubelet.conf \ ... If something goes wrong with a machine config that you apply, you can always back out that change. 
For example, if you had run oc create -f ./myconfig.yaml to apply a machine config, you could remove that machine config by running the following command: USD oc delete -f ./myconfig.yaml If that was the only problem, the nodes in the affected pool should return to a non-degraded state. This actually causes the rendered configuration to roll back to its previously rendered state. If you add your own machine configs to your cluster, you can use the commands shown in the example to check their status and the related status of the pool to which they are applied. 3.2. Using MachineConfig objects to configure nodes You can use the tasks in this section to create MachineConfig objects that modify files, systemd unit files, and other operating system features running on OpenShift Container Platform nodes. For more ideas on working with machine configs, see content related to updating SSH authorized keys, verifying image signatures , enabling SCTP , and configuring iSCSI initiatornames for OpenShift Container Platform. OpenShift Container Platform supports Ignition specification version 3.2 . All new machine configs you create going forward should be based on Ignition specification version 3.2. If you are upgrading your OpenShift Container Platform cluster, any existing Ignition specification version 2.x machine configs will be translated automatically to specification version 3.2. Tip Use the following "Configuring chrony time service" procedure as a model for how to go about adding other configuration files to OpenShift Container Platform nodes. 3.2.1. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml Additional resources Creating machine configs with Butane 3.2.2. 
Disabling the chrony time service You can disable the chrony time service ( chronyd ) for nodes with a specific role by using a MachineConfig custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the MachineConfig CR that disables chronyd for the specified node role. Save the following YAML in the disable-chronyd.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: "chronyd.service" 1 Node role where you want to disable chronyd , for example, master . Create the MachineConfig CR by running the following command: USD oc create -f disable-chronyd.yaml 3.2.3. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. 
Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.22.1 ip-10-0-136-243.ec2.internal Ready master 34m v1.22.1 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.22.1 ip-10-0-142-249.ec2.internal Ready master 34m v1.22.1 ip-10-0-153-11.ec2.internal Ready worker 28m v1.22.1 ip-10-0-153-150.ec2.internal Ready master 34m v1.22.1 You can see that scheduling on each worker node is disabled as the change is being applied. 
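Before checking individual nodes, it can be convenient to block until the pool reports that it has converged. The following one-liner is a sketch rather than part of the original procedure, and the timeout value is an assumption to tune for your cluster:
oc wait machineconfigpool/worker --for=condition=Updated=True --timeout=30m
When the command returns, every node in the worker pool is running the new rendered configuration and scheduling is enabled again.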
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 3.2.4. Enabling multipathing with kernel arguments on RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Post-installation support is available by activating multipathing via the machine config. Important Enabling multipathing during installation is supported and recommended for nodes provisioned in OpenShift Container Platform 4.8 or higher. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. For more information about enabling multipathing during installation time, see "Enabling multipathing with kernel arguments on RHCOS" in the Installing on bare metal documentation. Important On IBM Z and LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and LinuxONE . Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.7 or later. You are logged in to the cluster as a user with administrative privileges. You have confirmed that the disk is enabled for multipathing. Multipathing is only supported on hosts that are connected to a SAN via an HBA adapter. 
Procedure To enable multipathing post-installation on control plane nodes: Create a machine config file, such as 99-master-kargs-mpath.yaml , that instructs the cluster to add the master label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing post-installation on worker nodes: Create a machine config file, such as 99-worker-kargs-mpath.yaml , that instructs the cluster to add the worker label and that identifies the multipath kernel argument, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' Create the new machine config by using either the master or worker YAML file you previously created: USD oc create -f ./99-worker-kargs-mpath.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.22.1 ip-10-0-136-243.ec2.internal Ready master 34m v1.22.1 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.22.1 ip-10-0-142-249.ec2.internal Ready master 34m v1.22.1 ip-10-0-153-11.ec2.internal Ready worker 28m v1.22.1 ip-10-0-153-150.ec2.internal Ready master 34m v1.22.1 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. Additional resources See Enabling multipathing with kernel arguments on RHCOS for more information about enabling multipathing during installation time. 3.2.5. 
Adding a real-time kernel to nodes Some OpenShift Container Platform workloads require a high degree of determinism. While Linux is not a real-time operating system, the Linux real-time kernel includes a preemptive scheduler that provides the operating system with real-time characteristics. If your OpenShift Container Platform workloads require these real-time characteristics, you can switch your machines to the Linux real-time kernel. For OpenShift Container Platform 4.9, you can make this switch using a MachineConfig object. Although making the change is as simple as changing a machine config kernelType setting to realtime , there are a few other considerations before making the change: Currently, the real-time kernel is supported only on worker nodes, and only for radio access network (RAN) use. The following procedure is fully supported with bare metal installations that use systems that are certified for Red Hat Enterprise Linux for Real Time 8. Real-time support in OpenShift Container Platform is limited to specific subscriptions. The following procedure is also supported for use with Google Cloud Platform. Prerequisites Have a running OpenShift Container Platform cluster (version 4.4 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for the real-time kernel: Create a YAML file (for example, 99-worker-realtime.yaml ) that contains a MachineConfig object for the realtime kernel type. This example tells the cluster to use a real-time kernel for all worker nodes: USD cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-realtime spec: kernelType: realtime EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 99-worker-realtime.yaml Check the real-time kernel: Once each impacted node reboots, log in to the cluster and run the following commands to make sure that the real-time kernel has replaced the regular kernel for the set of nodes you configured: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.22.1 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.22.1 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.22.1 USD oc debug node/ip-10-0-143-147.us-east-2.compute.internal Example output Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux The kernel name contains rt , and the text "PREEMPT RT" indicates that this is a real-time kernel. To go back to the regular kernel, delete the MachineConfig object: USD oc delete -f 99-worker-realtime.yaml 3.2.6. Configuring journald settings If you need to configure settings for the journald service on OpenShift Container Platform nodes, you can do that by modifying the appropriate configuration file and passing the file to the appropriate pool of nodes as a machine config. This procedure describes how to modify journald rate limiting settings in the /etc/systemd/journald.conf file and apply them to worker nodes. See the journald.conf man page for information on how to use that file. Prerequisites Have a running OpenShift Container Platform cluster.
Log in to the cluster as a user with administrative privileges. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the worker nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config to the pool: USD oc apply -f 40-worker-custom-journald.yaml Check that the new machine config is applied and that the nodes are not in a degraded state. It might take a few minutes. The worker pool will show the updates in progress, as each node successfully has the new machine config applied: USD oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m To check that the change was applied, you can log in to a worker node: USD oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD USD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug ... ... sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit Additional resources Creating machine configs with Butane 3.2.7. Adding extensions to RHCOS RHCOS is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to OpenShift Container Platform clusters across all platforms. While adding software packages to RHCOS systems is generally discouraged, the MCO provides an extensions feature you can use to add a minimal set of features to RHCOS nodes. Currently, the following extension is available: usbguard : Adding the usbguard extension protects RHCOS systems from attacks from intrusive USB devices. See USBGuard for details. The following procedure describes how to use a machine config to add one or more extensions to your RHCOS nodes. Prerequisites Have a running OpenShift Container Platform cluster (version 4.6 or later). Log in to the cluster as a user with administrative privileges. Procedure Create a machine config for extensions: Create a YAML file (for example, 80-extensions.yaml ) that contains a MachineConfig extensions object. This example tells the cluster to add the usbguard extension. USD cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF Add the machine config to the cluster. Type the following to add the machine config to the cluster: USD oc create -f 80-extensions.yaml This sets all worker nodes to have rpm packages for usbguard installed. 
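Note The Machine Config Operator rolls the change out to the worker pool one node at a time. Before running the verification steps that follow, you can optionally watch the pool until all machines report as updated. This is a hedged convenience command, not a required step:

oc get mcp worker -w

Press Ctrl+C to stop watching once the UPDATED column shows True and UPDATEDMACHINECOUNT matches MACHINECOUNT.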
Check that the extensions were applied: USD oc get machineconfig 80-worker-extensions Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s Check that the new machine config is now applied and that the nodes are not in a degraded state. It may take a few minutes. The worker pool will show the updates in progress, as each machine successfully has the new machine config applied: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m Check the extensions. To check that the extension was applied, run: USD oc get node | grep worker Example output NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.22.1 USD oc debug node/ip-10-0-169-2.us-east-2.compute.internal Example output ... To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm 3.2.8. Loading custom firmware blobs in the machine config manifest Because the default location for firmware blobs in /usr/lib is read-only, you can locate a custom firmware blob by updating the search path. This enables you to load local firmware blobs in the machine config manifest when the blobs are not managed by RHCOS. Procedure Create a Butane config file, 98-worker-firmware-blob.bu , that updates the search path so that it is root-owned and writable to local storage. The following example places the custom blob file from your local workstation onto nodes under /var/lib/firmware . Note See "Creating machine configs with Butane" for information about Butane. Butane config file for custom firmware blob variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4 1 Sets the path on the node where the firmware package is copied to. 2 Specifies a file with contents that are read from a local file directory on the system running Butane. The path of the local file is relative to a files-dir directory, which must be specified by using the --files-dir option with Butane in the following step. 3 Sets the permissions for the file on the RHCOS node. It is recommended to set 0644 permissions. 4 The firmware_class.path parameter customizes the kernel search path of where to look for the custom firmware blob that was copied from your local workstation onto the root file system of the node. This example uses /var/lib/firmware as the customized path. Run Butane to generate a MachineConfig object file that uses a copy of the firmware blob on your local workstation named 98-worker-firmware-blob.yaml . The firmware blob contains the configuration to be delivered to the nodes. The following example uses the --files-dir option to specify the directory on your workstation where the local file or files are located: USD butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name> Apply the configurations to the nodes in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. 
If the cluster is already running, apply the file: USD oc apply -f 98-worker-firmware-blob.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. Additional resources Creating machine configs with Butane 3.3. Configuring MCO-related custom resources Besides managing MachineConfig objects, the MCO manages two custom resources (CRs): KubeletConfig and ContainerRuntimeConfig . Those CRs let you change node-level settings impacting how the Kubelet and CRI-O container runtime services behave. 3.3.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools. Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-max-pods 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1 1 If a label has been added it appears under labels . 
If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-max-pods 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-max-pods -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 3.3.2. Creating a ContainerRuntimeConfig CR to edit CRI-O parameters You can change some of the settings associated with the OpenShift Container Platform CRI-O runtime for the nodes associated with a specific machine config pool (MCP). Using a ContainerRuntimeConfig custom resource (CR), you set the configuration values and add a label to match the MCP. The MCO then rebuilds the crio.conf and storage.conf configuration files on the associated nodes with the updated values.
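Note Before you create a ContainerRuntimeConfig CR, it can be useful to record the values that CRI-O currently uses on a node so that you have a baseline to compare against after the change. The following one-liner is a hedged sketch, not part of the documented procedure; replace <node_name> with a node in the pool you plan to modify:

oc debug node/<node_name> -- chroot /host crio config 2>/dev/null | egrep 'pids_limit|log_level|log_size_max'

The same crio config check is repeated later in this section to verify the new values.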
Note To revert the changes implemented by using a ContainerRuntimeConfig CR, you must delete the CR. Removing the label from the machine config pool does not revert the changes. You can modify the following settings by using a ContainerRuntimeConfig CR: PIDs limit : The pidsLimit parameter sets the CRI-O pids_limit parameter, which is maximum number of processes allowed in a container. The default is 1024 ( pids_limit = 1024 ). Log level : The logLevel parameter sets the CRI-O log_level parameter, which is the level of verbosity for log messages. The default is info ( log_level = info ). Other options include fatal , panic , error , warn , debug , and trace . Overlay size : The overlaySize parameter sets the CRI-O Overlay storage driver size parameter, which is the maximum size of a container image. Maximum log size : The logSizeMax parameter sets the CRI-O log_size_max parameter, which is the maximum size allowed for the container log file. The default is unlimited ( log_size_max = -1 ). If set to a positive number, it must be at least 8192 to not be smaller than the ConMon read buffer. ConMon is a program that monitors communications between a container manager (such as Podman or CRI-O) and the OCI runtime (such as runc or crun) for a single container. You should have one ContainerRuntimeConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all the pools, you only need one ContainerRuntimeConfig CR for all the pools. You should edit an existing ContainerRuntimeConfig CR to modify existing settings or add new settings instead of creating a new CR for each change. It is recommended to create a new ContainerRuntimeConfig CR only to modify a different machine config pool, or for changes that are intended to be temporary so that you can revert the changes. You can create multiple ContainerRuntimeConfig CRs, as needed, with a limit of 10 per cluster. For the first ContainerRuntimeConfig CR, the MCO creates a machine config appended with containerruntime . With each subsequent CR, the controller creates a new containerruntime machine config with a numeric suffix. For example, if you have a containerruntime machine config with a -2 suffix, the containerruntime machine config is appended with -3 . If you want to delete the machine configs, you should delete them in reverse order to avoid exceeding the limit. For example, you should delete the containerruntime-3 machine config before deleting the containerruntime-2 machine config. Note If you have a machine config with a containerruntime-9 suffix, and you create another ContainerRuntimeConfig CR, a new machine config is not created, even if there are fewer than 10 containerruntime machine configs. Example showing multiple ContainerRuntimeConfig CRs USD oc get ctrcfg Example output NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s Example showing multiple containerruntime machine configs USD oc get mc | grep container Example output ... 01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m ... 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m ... 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s ... 
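Note The following is a minimal sketch of the reverse-order deletion described above, assuming the generated machine config names shown in the example output; adjust the names to match the output of oc get mc | grep container on your cluster. Delete the machine config with the highest numeric suffix first:

oc delete mc 99-worker-generated-containerruntime-2
oc delete mc 99-worker-generated-containerruntime-1
oc delete mc 99-worker-generated-containerruntime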
The following example raises the pids_limit to 2048, sets the log_level to debug , sets the overlay size to 8 GB, and sets the log_size_max to unlimited: Example ContainerRuntimeConfig CR apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: "-1" 5 1 Specifies the machine config pool label. 2 Optional: Specifies the maximum number of processes allowed in a container. 3 Optional: Specifies the level of verbosity for log messages. 4 Optional: Specifies the maximum size of a container image. 5 Optional: Specifies the maximum size allowed for the container log file. If set to a positive number, it must be at least 8192. Procedure To change CRI-O settings using the ContainerRuntimeConfig CR: Create a YAML file for the ContainerRuntimeConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: "-1" 1 Specify a label for the machine config pool that you want to modify. 2 Set the parameters as needed. Create the ContainerRuntimeConfig CR: USD oc create -f <file_name>.yaml Verify that the CR is created: USD oc get ContainerRuntimeConfig Example output NAME AGE overlay-size 3m19s Check that a new containerruntime machine config is created: USD oc get machineconfigs | grep containerrun Example output 99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s Monitor the machine config pool until all are shown as ready: USD oc get mcp worker Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h Verify that the settings were applied in CRI-O: Open an oc debug session to a node in the machine config pool and run chroot /host . USD oc debug node/<node_name> sh-4.4# chroot /host Verify the changes in the crio.conf file: sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max' Example output pids_limit = 2048 log_size_max = -1 log_level = "debug" Verify the changes in the `storage.conf` file: sh-4.4# head -n 7 /etc/containers/storage.conf Example output [storage] driver = "overlay" runroot = "/var/run/containers/storage" graphroot = "/var/lib/containers/storage" [storage.options] additionalimagestores = [] size = "8G" 3.3.3. Setting the default maximum container root partition size for Overlay with CRI-O The root partition of each container shows all of the available disk space of the underlying host. Follow this guidance to set a maximum partition size for the root disk of all containers.
To configure the maximum Overlay size, as well as other CRI-O options like the log level and PID limit, you can create the following ContainerRuntimeConfig custom resource (CR): apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G Procedure Create the configuration object: USD oc apply -f overlaysize.yml To apply the new CRI-O configuration to your worker nodes, edit the worker machine config pool: USD oc edit machineconfigpool worker Add the custom-crio label based on the matchLabels name you set in the ContainerRuntimeConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2020-07-09T15:46:34Z" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: "" Save the changes, then view the machine configs: USD oc get machineconfigs New 99-worker-generated-containerruntime and rendered-worker-xyz objects are created: Example output 99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s After those objects are created, monitor the machine config pool for the changes to be applied: USD oc get mcp worker The worker pool shows UPDATING as True , as well as the number of machines, the number updated, and other details: Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h When complete, the worker pool transitions back to UPDATING as False , and the UPDATEDMACHINECOUNT number matches the MACHINECOUNT : Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h Looking at a worker machine, you see that the new 8 GB max size configuration is applied to all of the workers: Example output head -n 7 /etc/containers/storage.conf [storage] driver = "overlay" runroot = "/var/run/containers/storage" graphroot = "/var/lib/containers/storage" [storage.options] additionalimagestores = [] size = "8G" Looking inside a container, you see that the root partition is now 8 GB: Example output ~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /
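Note As an optional, hedged convenience check that is not part of the documented procedure, you can confirm the new overlay size on every worker node in one pass by looping over the nodes. The label selector below assumes the default worker role label:

for node in $(oc get nodes -l node-role.kubernetes.io/worker= -o name); do
  echo "== ${node} =="
  oc debug "${node}" -- chroot /host head -n 7 /etc/containers/storage.conf
done

Each node should report size = "8G" under [storage.options], matching the overlaySize value in the ContainerRuntimeConfig CR.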
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m",
"oc describe mcp worker",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3",
"oc get machineconfigs",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m",
"oc describe machineconfigs 01-master-kubelet",
"Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\",
"oc delete -f ./myconfig.yaml",
"variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: \"chronyd.service\"",
"oc create -f disable-chronyd.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.22.1 ip-10-0-136-243.ec2.internal Ready master 34m v1.22.1 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.22.1 ip-10-0-142-249.ec2.internal Ready master 34m v1.22.1 ip-10-0-153-11.ec2.internal Ready worker 28m v1.22.1 ip-10-0-153-150.ec2.internal Ready master 34m v1.22.1",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"oc create -f ./99-worker-kargs-mpath.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.22.1 ip-10-0-136-243.ec2.internal Ready master 34m v1.22.1 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.22.1 ip-10-0-142-249.ec2.internal Ready master 34m v1.22.1 ip-10-0-153-11.ec2.internal Ready worker 28m v1.22.1 ip-10-0-153-150.ec2.internal Ready master 34m v1.22.1",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF",
"oc create -f 99-worker-realtime.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.22.1 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.22.1 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.22.1",
"oc debug node/ip-10-0-143-147.us-east-2.compute.internal",
"Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"oc delete -f 99-worker-realtime.yaml",
"variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit",
"cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF",
"oc create -f 80-extensions.yaml",
"oc get machineconfig 80-worker-extensions",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.22.1",
"oc debug node/ip-10-0-169-2.us-east-2.compute.internal",
"To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm",
"variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4",
"butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>",
"oc apply -f 98-worker-firmware-blob.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc get ctrcfg",
"NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s",
"oc get mc | grep container",
"01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: \"-1\" 5",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: \"-1\"",
"oc create -f <file_name>.yaml",
"oc get ContainerRuntimeConfig",
"NAME AGE overlay-size 3m19s",
"oc get machineconfigs | grep containerrun",
"99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max'",
"pids_limit = 2048 log_size_max = -1 log_level = \"debug\"",
"sh-4.4# head -n 7 /etc/containers/storage.conf",
"[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G",
"oc apply -f overlaysize.yml",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"",
"oc get machineconfigs",
"99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h",
"head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/post-install-machine-configuration-tasks |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/making-open-source-more-inclusive |
Index | Index A active logs default file location, Configuring Subsystem Logs message categories, Services That Are Logged adding extensions to CRLs, Setting CRL Extensions administrators creating, Creating Users deleting, Deleting a Certificate System User modifying group membership, Changing Members in a Group sudo permissions for, Setting sudo Permissions for Certificate System Services tools provided Certificate System console, Using pkiconsole for CA, OCSP, KRA, and TKS Subsystems agent certificate requesting, Requesting and Receiving a Certificate through the End-Entities Page agents creating, Creating Users deleting, Deleting a Certificate System User enrolling users in person, Certificate Revocation Pages modifying group membership, Changing Members in a Group role defined, Agents See also Agent Services interface, Agents archiving rotated log files, Log File Rotation auditors creating, Creating Users authentication during certificate revocation, User-Initiated Revocation managing through the Console, Setting up PIN-Based Enrollment authentication modules agent initiated user enrollment, Certificate Revocation Pages deleting, Registering Custom Authentication Plug-ins registering new ones, Registering Custom Authentication Plug-ins authorityInfoAccess, authorityInfoAccess authorityKeyIdentifier, Setting Restrictions on CA Certificates , authorityKeyIdentifier , authorityKeyIdentifier B backing up the Certificate System, Backing up and Restoring Certificate System backups, Backing up and Restoring Certificate System base-64 encoded file viewing content, Viewing Certificates and CRLs Published to File basicConstraints, basicConstraints bridge certificates, Using Cross-Pair Certificates buffered logging, Buffered and Unbuffered Logging C CA configuring ECC signing algorithm, Setting the Signing Algorithms for Certificates enabling SCEP enrollments, Enabling SCEP Enrollments SCEP settings, Configuring Security Settings for SCEP CA certificate mapper, LdapCaSimpleMap CA certificate publisher, LdapCaCertPublisher , LdapCertificatePairPublisher CA signing certificate, CA Signing Key Pair and Certificate changing trust settings of, Changing the Trust Settings of a CA Certificate deleting, Deleting Certificates from the Database nickname, CA Signing Key Pair and Certificate requesting, Requesting Certificates through the Console viewing details of, Viewing Database Content through the Console certificate viewing content, Viewing Certificates and CRLs Published to File certificate chains installing in the certificate database, Installing Certificates through the Console why install, About CA Certificate Chains certificate database how to manage, Managing the Certificate Database what it contains, Managing the Certificate Database where it is maintained, Managing the Certificate Database Certificate Manager administrators creating, Creating Users agents creating, Creating Users configuring SMTP settings for notifications, Configuring a Mail Server for Certificate System Notifications key pairs and certificates CA signing certificate, CA Signing Key Pair and Certificate OCSP signing certificate, OCSP Signing Key Pair and Certificate SSL server certificate, SSL Server Key Pair and Certificate subsystem certificate, Subsystem Certificate TLS CA signing certificate, OCSP Signing Key Pair and Certificate manual updates to publishing directory, Updating Certificates and CRLs in a Directory serial number range, Changing the Restrictions for CAs on Issuing Certificates certificate profiles signing 
algorithms, Setting the Signing Algorithms for Certificates certificate renewal, Configuring Profiles to Enable Renewal certificate revocation authentication during, User-Initiated Revocation reasons for, Reasons for Revoking a Certificate who can revoke certificates, Reasons for Revoking a Certificate Certificate Setup Wizard using to install certificate chains, Installing Certificates through the Console using to install certificates, Installing Certificates through the Console Certificate System backing up, Backing up and Restoring Certificate System restoring, Backing up and Restoring the Instance Directory Certificate System console Configuration tab, Using pkiconsole for CA, OCSP, KRA, and TKS Subsystems managing logs, Viewing Logs in the Console Status tab, Using pkiconsole for CA, OCSP, KRA, and TKS Subsystems Certificate System Console configuring authentication, Setting up Directory-Based Authentication , Setting up PIN-Based Enrollment Certificate System data where it is stored, Configuring the LDAP Database certificateIssuer, certificateIssuer certificatePolicies, certificatePoliciesExt certificates extensions for, Setting Restrictions on CA Certificates , Defaults, Constraints, and Extensions for Certificates and CRLs how to revoke, Reasons for Revoking a Certificate installing, Installing Certificates in the Certificate System Database publishing to files, Publishing to Files publishing to LDAP directory required schema, Configuring the LDAP Directory revocation reasons, Reasons for Revoking a Certificate signing algorithms, Setting the Signing Algorithms for Certificates certutil requesting certificates, Creating Certificate Signing Requests changing group members, Changing Members in a Group trust settings in certificates, Changing the Trust Settings of a CA Certificate why would you change, Changing the Trust Settings of a CA Certificate command-line utilities for adding extensions to Certificate System certificates, Requesting Signing Certificates , Requesting Other Certificates Configuration tab, Using pkiconsole for CA, OCSP, KRA, and TKS Subsystems CRL viewing content, Viewing Certificates and CRLs Published to File CRL Distribution Point extension, CRL Issuing Points CRL extension modules CRLReason, Freshest CRL Extension Default CRL publisher, LdapCrlPublisher CRL signing certificate, About Revoking Certificates requesting, Requesting Certificates through the Console cRLDistributionPoints, CRLDistributionPoints CRLNumber, CRLNumber CRLReason, CRLReason CRLs defined, About Revoking Certificates entering multiple update times, Configuring CRLs for Each Issuing Point entering update period, Configuring CRLs for Each Issuing Point extension-specific modules, About CRL Extensions extensions for, Standard X.509 v3 CRL Extensions Reference issuing or distribution points, CRL Issuing Points publishing of, About Revoking Certificates publishing to files, Publishing to Files publishing to LDAP directory, Publishing CRLs , LDAP Publishing required schema, Configuring the LDAP Directory supported extensions, About Revoking Certificates when automated updates take place, About Revoking Certificates when generated, About Revoking Certificates who generates it, About Revoking Certificates cross-pair certificates, Using Cross-Pair Certificates D deleting authentication modules, Registering Custom Authentication Plug-ins log modules, Managing Log Modules mapper modules, Registering Custom Mapper and Publisher Plug-in Modules privileged users, Deleting a Certificate System User 
publisher modules, Registering Custom Mapper and Publisher Plug-in Modules deltaCRLIndicator, deltaCRLIndicator DER-encoded file viewing content, Viewing Certificates and CRLs Published to File directory removing expired certificates from, unpublishExpiredCerts (UnpublishExpiredJob) DN components mapper, LdapDNCompsMap downloading certificates, Installing Certificates in the Certificate System Database E ECC configuring, Setting the Signing Algorithms for Certificates requesting, Creating Certificate Signing Requests encrypted file system (EFS), Extended Key Usage Extension Default end-entity certificate publisher, LdapUserCertPublisher end-entity certificates renewal, Configuring Profiles to Enable Renewal enrollment agent initiated, Certificate Revocation Pages Enterprise Security Client, Enterprise Security Client Error log defined, Tomcat Error and Access Logs expired certificates removing from the directory, unpublishExpiredCerts (UnpublishExpiredJob) Extended Key Usage extension OIDs for encrypted file system, Extended Key Usage Extension Default extensions, Setting Restrictions on CA Certificates , Defaults, Constraints, and Extensions for Certificates and CRLs an example, Standard X.509 v3 Certificate Extension Reference authorityInfoAccess, authorityInfoAccess authorityKeyIdentifier, Setting Restrictions on CA Certificates , authorityKeyIdentifier , authorityKeyIdentifier basicConstraints, basicConstraints CA certificates and, Setting Restrictions on CA Certificates certificateIssuer, certificateIssuer certificatePolicies, certificatePoliciesExt cRLDistributionPoints, CRLDistributionPoints CRLNumber, CRLNumber CRLReason, CRLReason deltaCRLIndicator, deltaCRLIndicator extKeyUsage, extKeyUsage invalidityDate, invalidityDate issuerAltName, issuerAltName Extension , issuerAltName issuingDistributionPoint, issuingDistributionPoint keyUsage, keyUsage nameConstraints, nameConstraints netscape-cert-type, netscape-cert-type Netscape-defined, Netscape-Defined Certificate Extensions Reference policyConstraints, policyConstraints policyMappings, policyMappings privateKeyUsagePeriod, privateKeyUsagePeriod subjectAltName, subjectAltName subjectDirectoryAttributes, subjectDirectoryAttributes tool for joining, Requesting Signing Certificates , Requesting Other Certificates tools for generating, Requesting Signing Certificates , Requesting Other Certificates X.509 certificate, summarized, Standard X.509 v3 Certificate Extension Reference X.509 CRL, summarized, Standard X.509 v3 CRL Extensions Reference extKeyUsage, extKeyUsage F Federal Bridge Certificate Authority, Using Cross-Pair Certificates file-based publisher, FileBasedPublisher flush interval for logs, Buffered and Unbuffered Logging G groups changing members, Changing Members in a Group H host name for mail server used for notifications, Configuring a Mail Server for Certificate System Notifications how to revoke certificates, Reasons for Revoking a Certificate I installing certificates, Installing Certificates in the Certificate System Database internal database default hostname, Changing the Internal Database Configuration precaution for changing the hostname, Changing the Internal Database Configuration defined, Configuring the LDAP Database how to distinguish from other Directory Server instances, Restricting Access to the Internal Database name format, Restricting Access to the Internal Database schema, Configuring the LDAP Database what is it used for, Configuring the LDAP Database when installed, Configuring the LDAP Database 
invalidityDate, invalidityDate IPv6 and SCEP certificates, Generating the SCEP Certificate for a Router issuerAltName, issuerAltName Extension , issuerAltName issuingDistributionPoint, issuingDistributionPoint J job modules registering new ones, Registering a Job Module jobs built-in modules unpublishExpiredCerts, unpublishExpiredCerts (UnpublishExpiredJob) compared to plug-in implementation, About Automated Jobs configuring job notification messages, Customizing CA Notification Messages , Setting up Automated Jobs setting frequency, Setting up the Job Scheduler specifying schedule for, Frequency Settings for Automated Jobs turning on scheduler, Setting up the Job Scheduler K Key Recovery Authority administrators creating, Creating Users agents creating, Creating Users key pairs and certificates list of, Key Recovery Authority Certificates storage key pair, Storage Key Pair subsystem certificate, Subsystem Certificate transport certificate, Transport Key Pair and Certificate keyUsage, keyUsage KRA transport certificate requesting, Requesting Certificates through the Console L LDAP publishing defined, LDAP Publishing manual updates, Updating Certificates and CRLs in a Directory when to do, Manually Updating Certificates in the Directory who can do this, Updating Certificates and CRLs in a Directory location of active log files, Configuring Subsystem Logs log modules deleting, Managing Log Modules registering new ones, Managing Log Modules logging buffered vs. unbuffered, Buffered and Unbuffered Logging log files archiving rotated files, Log File Rotation default location, Configuring Subsystem Logs signing rotated files, Signing Log Files timing of rotation, Log File Rotation log levels, Log Levels (Message Categories) default selection, Log Levels (Message Categories) how they relate to message categories, Log Levels (Message Categories) significance of choosing the right level, Log Levels (Message Categories) managing from Certificate System console, Viewing Logs in the Console services that are logged, Services That Are Logged types of logs, Configuring Subsystem Logs Error, Tomcat Error and Access Logs M mail server used for notifications, Configuring a Mail Server for Certificate System Notifications managing certificate database, Managing the Certificate Database mapper modules deleting, Registering Custom Mapper and Publisher Plug-in Modules registering new ones, Registering Custom Mapper and Publisher Plug-in Modules mappers created during installation, Creating Mappers , LdapCaSimpleMap , LdapSimpleMap mappers that use CA certificate, LdapCaSimpleMap DN components, LdapDNCompsMap modifying privileged user's group membership, Changing Members in a Group N Name extension modules Issuer Alternative Name, Issuer Alternative Name Extension Default nameConstraints, nameConstraints naming convention for internal database instances, Restricting Access to the Internal Database netscape-cert-type, netscape-cert-type nickname for CA signing certificate, CA Signing Key Pair and Certificate for OCSP signing certificate, OCSP Signing Key Pair and Certificate for signing certificate, OCSP Signing Key Pair and Certificate for SSL server certificate, SSL Server Key Pair and Certificate , SSL Server Key Pair and Certificate for subsystem certificate, Subsystem Certificate , Subsystem Certificate , Subsystem Certificate for TLS signing certificate, OCSP Signing Key Pair and Certificate notifications configuring the mail server hostname, Configuring a Mail Server for Certificate System Notifications 
port, Configuring a Mail Server for Certificate System Notifications to agents about unpublishing certificates, unpublishExpiredCerts (UnpublishExpiredJob) O OCSP publisher, OCSPPublisher OCSP signing certificate, OCSP Signing Key Pair and Certificate nickname, OCSP Signing Key Pair and Certificate requesting, Requesting Certificates through the Console Online Certificate Status Manager administrators creating, Creating Users agents creating, Creating Users key pairs and certificates signing certificate, OCSP Signing Key Pair and Certificate SSL server certificate, SSL Server Key Pair and Certificate subsystem certificate, Subsystem Certificate P PIN Generator tool delivering PINs to users, Setting up PIN-Based Enrollment plug-in modules for CRL extensions CRLReason, Freshest CRL Extension Default for publishing FileBasedPublisher, FileBasedPublisher LdapCaCertPublisher, LdapCaCertPublisher , LdapCertificatePairPublisher LdapCaSimpleMap, LdapCaSimpleMap LdapCrlPublisher, LdapCrlPublisher LdapDNCompsMap, LdapDNCompsMap LdapUserCertPublisher, LdapUserCertPublisher OCSPPublisher, OCSPPublisher for scheduling jobs unpublishExpiredCerts, unpublishExpiredCerts (UnpublishExpiredJob) Issuer Alternative Name, Issuer Alternative Name Extension Default policyConstraints, policyConstraints policyMappings, policyMappings ports for the mail server used for notifications, Configuring a Mail Server for Certificate System Notifications privateKeyUsagePeriod, privateKeyUsagePeriod privileged users deleting, Deleting a Certificate System User modifying privileges group membership, Changing Members in a Group types agents, Agents profiles how profiles work , The Enrollment Profile publisher modules deleting, Registering Custom Mapper and Publisher Plug-in Modules registering new ones, Registering Custom Mapper and Publisher Plug-in Modules publishers created during installation, Configuring LDAP Publishers , LdapCaCertPublisher , LdapUserCertPublisher , LdapCertificatePairPublisher publishers that can publish to CA's entry in the directory, LdapCaCertPublisher , LdapCrlPublisher , LdapCertificatePairPublisher files, FileBasedPublisher OCSP responder, OCSPPublisher users' entries in the directory, LdapUserCertPublisher publishing of certificates to files, Publishing to Files of CRLs, About Revoking Certificates to files, Publishing to Files to LDAP directory, Publishing CRLs , LDAP Publishing queue, Enabling a Publishing Queue (see also publishing queue) viewing content, Viewing Certificates and CRLs Published to File publishing directory defined, LDAP Publishing publishing queue, Enabling a Publishing Queue enabling, Enabling a Publishing Queue R reasons for revoking certificates, Reasons for Revoking a Certificate registering authentication modules, Registering Custom Authentication Plug-ins custom OIDs, Standard X.509 v3 Certificate Extension Reference job modules, Registering a Job Module log modules, Managing Log Modules mapper modules, Registering Custom Mapper and Publisher Plug-in Modules publisher modules, Registering Custom Mapper and Publisher Plug-in Modules requesting certificates agent certificate, Requesting and Receiving a Certificate through the End-Entities Page CA signing certificate, Requesting Certificates through the Console CRL signing certificate, Requesting Certificates through the Console ECC certificates, Creating Certificate Signing Requests KRA transport certificate, Requesting Certificates through the Console OCSP signing certificate, Requesting Certificates through the Console 
SSL client certificate, Requesting Certificates through the Console SSL server certificate, Requesting Certificates through the Console through the Console, Requesting Certificates through the Console through the end-entities page, Requesting and Receiving a Certificate through the End-Entities Page user certificate, Requesting and Receiving a Certificate through the End-Entities Page using certutil, Creating Certificate Signing Requests restarting subsystem instance, Starting, Stopping, and Restarting a PKI Instance sudo permissions for administrators, Setting sudo Permissions for Certificate System Services without the java security manager, Starting a Subsystem Instance without the Java Security Manager restore, Backing up and Restoring the Instance Directory restoring the Certificate System, Backing up and Restoring the Instance Directory revoking certificates reasons, Reasons for Revoking a Certificate who can revoke certificates, Reasons for Revoking a Certificate roles agent, Agents rotating log files archiving files, Log File Rotation how to set the time, Log File Rotation signing files, Signing Log Files RSA configuring, Setting the Signing Algorithms for Certificates S SCEP enabling, Enabling SCEP Enrollments setting allowed algorithms, Configuring Security Settings for SCEP setting nonce sizes, Configuring Security Settings for SCEP using a separate authentication certificate, Configuring Security Settings for SCEP SCEP certificates and IPv6, Generating the SCEP Certificate for a Router setting CRL extensions, Setting CRL Extensions signing rotated log files, Signing Log Files signing algorithms, Setting the Signing Algorithms for Certificates ECC certificates, Setting the Signing Algorithms for Certificates RSA certificates, Setting the Signing Algorithms for Certificates signing certificate, OCSP Signing Key Pair and Certificate changing trust settings of, Changing the Trust Settings of a CA Certificate deleting, Deleting Certificates from the Database nickname, OCSP Signing Key Pair and Certificate viewing details of, Viewing Database Content through the Console SMTP settings, Configuring a Mail Server for Certificate System Notifications SSL client certificate requesting, Requesting Certificates through the Console SSL server certificate, SSL Server Key Pair and Certificate , SSL Server Key Pair and Certificate changing trust settings of, Changing the Trust Settings of a CA Certificate deleting, Deleting Certificates from the Database nickname, SSL Server Key Pair and Certificate , SSL Server Key Pair and Certificate requesting, Requesting Certificates through the Console viewing details of, Viewing Database Content through the Console starting subsystem instance, Starting, Stopping, and Restarting a PKI Instance sudo permissions for administrators, Setting sudo Permissions for Certificate System Services without the java security manager, Starting a Subsystem Instance without the Java Security Manager Status tab, Using pkiconsole for CA, OCSP, KRA, and TKS Subsystems stoping subsystem instance sudo permissions for administrators, Setting sudo Permissions for Certificate System Services stopping subsystem instance, Starting, Stopping, and Restarting a PKI Instance storage key pair, Storage Key Pair subjectAltName, subjectAltName subjectDirectoryAttributes, subjectDirectoryAttributes subjectKeyIdentifier subjectKeyIdentifier, subjectKeyIdentifier subsystem certificate, Subsystem Certificate , Subsystem Certificate , Subsystem Certificate nickname, Subsystem Certificate , 
Subsystem Certificate , Subsystem Certificate subsystems for tokens Enterprise Security Client, A Review of Certificate System Subsystems sudo permissions for administrators, Setting sudo Permissions for Certificate System Services T templates for notifications, Customizing CA Notification Messages timing log rotation, Log File Rotation TLS CA signing certificate, OCSP Signing Key Pair and Certificate nickname, OCSP Signing Key Pair and Certificate Token Key Service administrators creating, Creating Users agents creating, Creating Users Token Management System Enterprise Security Client, Enterprise Security Client tokens changing password of, Changing a Token's Password managing, Managing Tokens Used by the Subsystems viewing which tokens are installed, Viewing Tokens TPS setting profiles, Setting Profiles for Users users, Creating and Managing Users for a TPS transport certificate, Transport Key Pair and Certificate changing trust settings of, Changing the Trust Settings of a CA Certificate deleting, Deleting Certificates from the Database viewing details of, Viewing Database Content through the Console trusted managers deleting, Deleting a Certificate System User modifying group membership, Changing Members in a Group U unbuffered logging, Buffered and Unbuffered Logging user certificate requesting, Requesting and Receiving a Certificate through the End-Entities Page users creating, Creating Users W why to revoke certificates, Reasons for Revoking a Certificate | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/ix01 |
probe::signal.do_action | probe::signal.do_action Name probe::signal.do_action - Examining or changing a signal action Synopsis Values sa_mask The new mask of the signal name Name of the probe point sig_name A string representation of the signal oldsigact_addr The address of the old sigaction struct associated with the signal sig The signal to be examined/changed sa_handler The new handler of the signal sigact_addr The address of the new sigaction struct associated with the signal | [
"signal.do_action"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-do-action |
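A brief usage sketch may help put the values above in context. The probe point and the context variables come from the reference entry itself, while the SIGTERM filter, the output format, and the script name are arbitrary choices made only for illustration:

# sigaction.stp - report handler changes for SIGTERM (illustrative sketch only)
probe signal.do_action {
  if (sig_name == "SIGTERM")
    printf("%s: sig=%d (%s) new handler=%p old sigaction at %p\n",
           name, sig, sig_name, sa_handler, oldsigact_addr)
}

Such a script would typically be run with stap -v sigaction.stp; root privileges and the matching kernel debug information are normally required for kernel-side probes.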
Appendix E. Cephx configuration options | Appendix E. Cephx configuration options The following are Cephx configuration options that can be set up during deployment. auth_cluster_required Description If enabled, the Red Hat Ceph Storage cluster daemons, ceph-mon and ceph-osd , must authenticate with each other. Valid settings are cephx or none . Type String Required No Default cephx . auth_service_required Description If enabled, the Red Hat Ceph Storage cluster daemons require Ceph clients to authenticate with the Red Hat Ceph Storage cluster in order to access Ceph services. Valid settings are cephx or none . Type String Required No Default cephx . auth_client_required Description If enabled, the Ceph client requires the Red Hat Ceph Storage cluster to authenticate with the Ceph client. Valid settings are cephx or none . Type String Required No Default cephx . keyring Description The path to the keyring file. Type String Required No Default /etc/ceph/$cluster.$name.keyring,/etc/ceph/$cluster.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin keyfile Description The path to a key file (that is, a file containing only the key). Type String Required No Default None key Description The key (that is, the text string of the key itself). Not recommended. Type String Required No Default None ceph-mon Location $mon_data/keyring Capabilities mon 'allow *' ceph-osd Location $osd_data/keyring Capabilities mon 'allow profile osd' osd 'allow *' radosgw Location $rgw_data/keyring Capabilities mon 'allow rwx' osd 'allow rwx' cephx_require_signatures Description If set to true , Ceph requires signatures on all message traffic between the Ceph client and the Red Hat Ceph Storage cluster, and between daemons comprising the Red Hat Ceph Storage cluster. Type Boolean Required No Default false cephx_cluster_require_signatures Description If set to true , Ceph requires signatures on all message traffic between Ceph daemons comprising the Red Hat Ceph Storage cluster. Type Boolean Required No Default false cephx_service_require_signatures Description If set to true , Ceph requires signatures on all message traffic between Ceph clients and the Red Hat Ceph Storage cluster. Type Boolean Required No Default false cephx_sign_messages Description If the Ceph version supports message signing, Ceph will sign all messages so they cannot be spoofed. Type Boolean Default true auth_service_ticket_ttl Description When the Red Hat Ceph Storage cluster sends a Ceph client a ticket for authentication, the cluster assigns the ticket a time to live. Type Double Default 60*60 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/cephx-configuration-options_conf |
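The options above are normally placed in the Ceph configuration. The following fragment is a rough sketch only: the [global] section placement follows ordinary ceph.conf conventions, most values simply restate the defaults listed above, and the signature requirement is switched on purely to show a non-default setting rather than as a recommendation:

[global]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
cephx_require_signatures = true
keyring = /etc/ceph/$cluster.$name.keyring
auth_service_ticket_ttl = 3600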
Chapter 2. Enabling the Argo CD plugin | Chapter 2. Enabling the Argo CD plugin You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps. This plugin provides a visual overview of the application's status, deployment details, commit message, author of the commit, container image promoted to environment and deployment history. Prerequisites Add Argo CD instance information to your app-config.yaml configmap as shown in the following example: argocd: appLocatorMethods: - type: 'config' instances: - name: argoInstance1 url: https://argoInstance1.com username: ${ARGOCD_USERNAME} password: ${ARGOCD_PASSWORD} - name: argoInstance2 url: https://argoInstance2.com username: ${ARGOCD_USERNAME} password: ${ARGOCD_PASSWORD} Add the following annotation to the entity's catalog-info.yaml file to identify the Argo CD applications. annotations: ... # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app. argocd/app-selector: '${ARGOCD_LABEL_SELECTOR}' (Optional) Add the following annotation to the entity's catalog-info.yaml file to switch between Argo CD instances as shown in the following example: annotations: ... # The Argo CD instance name used in `app-config.yaml`. argocd/instance-name: '${ARGOCD_INSTANCE}' Note If you do not set this annotation, the Argo CD plugin defaults to the first Argo CD instance configured in app-config.yaml . Procedure Add the following to your dynamic-plugins ConfigMap to enable the Argo CD plugin. global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/backstage-community-plugin-redhat-argocd disabled: false 2.1. Enabling Argo CD Rollouts The optional Argo CD Rollouts feature enhances Kubernetes by providing advanced deployment strategies, such as blue-green and canary deployments, for your applications. When integrated into the Backstage Kubernetes plugin, it allows developers and operations teams to visualize and manage Argo CD Rollouts seamlessly within the Backstage interface. Prerequisites The Backstage Kubernetes plugin ( @backstage/plugin-kubernetes ) is installed and configured. To install and configure the Kubernetes plugin in Backstage, see the Installation and Configuration guide. You have access to the Kubernetes cluster with the necessary permissions to create and manage custom resources and ClusterRoles . The Kubernetes cluster has the argoproj.io group resources (for example, Rollouts and AnalysisRuns) installed. Procedure In the app-config.yaml file in your Backstage instance, add the following customResources component under the kubernetes configuration to enable Argo Rollouts and AnalysisRuns: kubernetes: ... customResources: - group: 'argoproj.io' apiVersion: 'v1alpha1' plural: 'Rollouts' - group: 'argoproj.io' apiVersion: 'v1alpha1' plural: 'analysisruns' Grant ClusterRole permissions for custom resources. Note If the Backstage Kubernetes plugin is already configured, the ClusterRole permissions for Rollouts and AnalysisRuns might already be granted. Use the prepared manifest to provide read-only ClusterRole access to both the Kubernetes and ArgoCD plugins.
If the ClusterRole permission is not granted, use the following YAML manifest to create the ClusterRole : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - argoproj.io resources: - rollouts - analysisruns verbs: - get - list Apply the manifest to the cluster using kubectl : kubectl apply -f <your-clusterrole-file>.yaml Ensure the ServiceAccount accessing the cluster has this ClusterRole assigned. Add annotations to catalog-info.yaml to identify Kubernetes resources for Backstage. For identifying resources by entity ID: annotations: ... backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> (Optional) For identifying resources by namespace: annotations: ... backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE> For using custom label selectors, which override resource identification by entity ID or namespace: annotations: ... backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end' Note Ensure you specify the labels declared in backstage.io/kubernetes-label-selector on your Kubernetes resources. This annotation overrides entity-based or namespace-based identification annotations, such as backstage.io/kubernetes-id and backstage.io/kubernetes-namespace . Add label to Kubernetes resources to enable Backstage to find the appropriate Kubernetes resources. Backstage Kubernetes plugin label: Add this label to map resources to specific Backstage entities. labels: ... backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> GitOps application mapping: Add this label to map Argo CD Rollouts to a specific GitOps application labels: ... app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME> Note If using the label selector annotation (backstage.io/kubernetes-label-selector), ensure the specified labels are present on the resources. The label selector will override other annotations like kubernetes-id or kubernetes-namespace. Verification Push the updated configuration to your GitOps repository to trigger a rollout. Open Red Hat Developer Hub interface and navigate to the entity you configured. Select the CD tab and then select the GitOps application . The side panel opens. In the Resources table of the side panel, verify that the following resources are displayed: Rollouts AnalysisRuns (optional) Expand a rollout resource and review the following details: The Revisions row displays traffic distribution details for different rollout versions. The Analysis Runs row displays the status of analysis tasks that evaluate rollout success. Additional resources The package path, scope, and name of the Red Hat ArgoCD plugin has changed since 1.2. For more information, see Breaking Changes in the Release notes for Red Hat Developer Hub . For more information on installing dynamic plugins, see Installing and viewing dynamic plugins . | [
"argocd: appLocatorMethods: - type: 'config' instances: - name: argoInstance1 url: https://argoInstance1.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD} - name: argoInstance2 url: https://argoInstance2.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD}",
"annotations: # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app. argocd/app-selector: 'USD{ARGOCD_LABEL_SELECTOR}'",
"annotations: # The Argo CD instance name used in `app-config.yaml`. argocd/instance-name: 'USD{ARGOCD_INSTANCE}'",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/backstage-community-plugin-redhat-argocd disabled: false",
"kubernetes: customResources: - group: 'argoproj.io' apiVersion: 'v1alpha1' plural: 'Rollouts' - group: 'argoproj.io' apiVersion: 'v1alpha1' plural: 'analysisruns'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - argoproj.io resources: - rollouts - analysisruns verbs: - get - list",
"apply -f <your-clusterrole-file>.yaml",
"annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE>",
"annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'",
"labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"labels: app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring_dynamic_plugins/enabling-the-argo-cd-plugin |
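Because the annotations above appear in separate snippets, a combined catalog-info.yaml entity may make their relationship clearer. This is a sketch only: the entity name, owner, and lifecycle are placeholders, the selector reuses the example label from the prerequisites, and the instance name matches the app-config.yaml example earlier in the chapter:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-quarkus-app                          # placeholder entity name
  annotations:
    argocd/app-selector: rht-gitops.com/janus-argocd=quarkus-app
    argocd/instance-name: argoInstance1         # optional; defaults to the first configured instance
    backstage.io/kubernetes-id: my-quarkus-app  # lets the Kubernetes plugin locate Rollouts
spec:
  type: service
  lifecycle: production
  owner: placeholder-team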
9.5. Network Devices | 9.5. Network Devices Red Hat Virtualization is able to expose three different types of network interface controller to guests. The type of network interface controller to expose to a guest is chosen when the guest is created but is changeable from the Red Hat Virtualization Manager. The e1000 network interface controller exposes a virtualized Intel PRO/1000 (e1000) to guests. The virtio network interface controller exposes a para-virtualized network device to guests. The rtl8139 network interface controller exposes a virtualized Realtek Semiconductor Corp RTL8139 to guests. Multiple network interface controllers are permitted per guest. Each controller added takes up an available PCI slot on the guest. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/network_devices |
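The controller type is chosen in the Manager rather than edited by hand, but it may help to know what the choice maps to underneath: at the libvirt level each option becomes the model of the guest's network interface. The fragment below is purely illustrative, and the interface type and bridge name are assumptions rather than anything this section specifies:

<interface type='bridge'>
  <source bridge='ovirtmgmt'/>
  <model type='virtio'/>  <!-- alternatives: 'e1000', 'rtl8139' -->
</interface>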
Red Hat Insights Remediations Guide | Red Hat Insights Remediations Guide Red Hat Insights 1-latest Fixing issues on RHEL systems with remediation playbooks Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide/index |
API overview | API overview OpenShift Container Platform 4.12 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/api_overview/index |
Chapter 11. Logging | Chapter 11. Logging 11.1. Enabling protocol logging The client can log AMQP protocol frames to the console. This data is often critical when diagnosing problems. To enable protocol logging, set the PN_TRACE_FRM environment variable to 1 : Example: Enabling protocol logging $ export PN_TRACE_FRM=1 $ <your-client-program> To disable protocol logging, unset the PN_TRACE_FRM environment variable. | [
"export PN_TRACE_FRM=1 <your-client-program>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/logging |
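Because PN_TRACE_FRM is read by the underlying Proton library, it can also be set for a single run (for example, PN_TRACE_FRM=1 <your-client-program>) or from inside a program before the connection is created. The following sketch uses the standard python-qpid-proton container pattern; the broker address is a placeholder and the programmatic toggle is an assumption made for illustration, not something this section prescribes:

import os
os.environ["PN_TRACE_FRM"] = "1"  # enable AMQP frame tracing before connecting

from proton.handlers import MessagingHandler
from proton.reactor import Container

class TraceExample(MessagingHandler):
    def on_start(self, event):
        # Placeholder address; frames for this connection are printed to the console
        event.container.connect("amqp://localhost:5672")

Container(TraceExample()).run()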
B.38.12. RHBA-2011:1283 - kernel bug fix update | B.38.12. RHBA-2011:1283 - kernel bug fix update Updated kernel packages that fix various bugs are now available for Red Hat Enterprise Linux 6. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fixes BZ# 731968 Prior to this update, a kernel panic could occur when the Intel 82599 Virtual Function driver was used from the guest. As a result, 10 gigabit Ethernet (10GbE) network interface cards (NICs) could not be used correctly. This update modifies the code so that 10GbE NICs can be used when they are operated from the guest. All users are advised to upgrade to these updated packages, which fix this bug. The system must be rebooted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2011-1283 |
Appendix B. Advanced Block Storage configuration | Appendix B. Advanced Block Storage configuration Before director-deployed installations, the cinder.conf file configured the Block Storage service and the backup service. When a value from cinder.conf does not have an orchestration (heat) template equivalent, you can use a custom environment file to pass values to the director. Add the values to an ExtraConfig section in the parameter_defaults section of a custom environment file, for example, the cinder-backup-settings.yaml file. B.1. Advanced configuration options With ExtraConfig , you can add additional hiera configuration to the cluster on all nodes. These settings are included on a dedicated backup node. However, if you used ControllerExtraConfig instead of ExtraConfig , your settings are installed on Controller nodes and not on a dedicated backup node. You can substitute DEFAULT/[cinder.conf setting] for the setting from the DEFAULT section of the cinder.conf file. The following example shows how the ExtraConfig entries appear in a YAML file: Table B.1 lists backup-related sample options. Table B.1. Block Storage backup service configuration options Option Type Default value Description backup_service_inithost_offload Optional True Offload pending backup delete during backup service startup. If false, the backup service remains down until all pending backups are deleted. use_multipath_for_image_xfer Optional False Attach volumes using multipath, if available, during backup and restore procedures. This affects all cinder attach operations, such as create volume from image, generic cold migrations, and other operations. num_volume_device_scan_tries Optional 3 The maximum number of times to rescan targets to find volumes during attach. backup_workers Optional 1 Number of backup processes to run. Running multiple concurrent backups or restores with compression results in significant performance gains. backup_native_threads_pool_size Optional 60 Size of the native threads pool for the backups. Most backup drivers rely heavily on this. You can decrease the value for specific drivers that do not rely on this option. backup_share Required Set to HOST:EXPORT_PATH. backup_container Optional None (String) Custom directory to use for backups. backup_enable_progress_timer Optional True Enable (true) or disable (false) the timer to send the periodic progress notifications to the Telemetry service (ceilometer) when backing up the volume to the backend storage. backup_mount_options Optional Comma-separated list of options that you can specify when you mount the NFS export that is specified in backup_share. backup_mount_point_base Optional $state_path/backup_mount (String) Base directory that contains mount point for NFS share. backup_compression_algorithm Optional zlib The compression algorithm that you want to use when you send backup data to the repository. Valid values are zlib , bz2 , and None . backup_file_size Optional 1999994880 Data from cinder volumes that are larger than this value are stored as multiple files in the backup repository. This option must be a multiple of backup_sha_block_size_bytes. backup_sha_block_size_bytes Optional 32768 Size of cinder volume blocks for digital signature calculation. | [
"parameter_defaults: ExtraConfig: cinder::config::cinder_config: DEFAULT/backup_compression_algorithm: value: None"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/block_storage_backup_guide/assembly_advanced-configuration-options |
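The single-option example above generalizes to any option in Table B.1. A fuller cinder-backup-settings.yaml file might combine several of them; the values below are illustrative placeholders, not recommendations:

parameter_defaults:
  ExtraConfig:
    cinder::config::cinder_config:
      DEFAULT/backup_workers:
        value: 4
      DEFAULT/backup_file_size:
        value: 1999994880
      DEFAULT/backup_share:
        value: '192.0.2.10:/export/cinder_backups'
      DEFAULT/backup_mount_options:
        value: 'vers=4.1'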
Appendix A. Reference: Settings in Administration Portal and VM Portal Windows | Appendix A. Reference: Settings in Administration Portal and VM Portal Windows A.1. Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows A.1.1. Virtual Machine General Settings Explained The following table details the options available on the General tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.1. Virtual Machine: General Settings Field Name Description Power cycle required? Cluster The name of the host cluster to which the virtual machine is attached. Virtual machines are hosted on any physical machine in that cluster in accordance with policy rules. Yes. Cross-cluster migration is for emergency use only. Moving clusters requires the virtual machine to be down. Template The template on which the virtual machine is based. This field is set to Blank by default, which allows you to create a virtual machine on which an operating system has not yet been installed. Templates are displayed as Name | Sub-version name (Sub-version number) . Each new version is displayed with a number in brackets that indicates the relative order of the version, with a higher number indicating a more recent version. The version name is displayed as base version if it is the root template of the template version chain. When the virtual machine is stateless, there is an option to select the latest version of the template. This option means that anytime a new version of this template is created, the virtual machine is automatically recreated on restart based on the latest template. Not applicable. This setting is for provisioning new virtual machines only. Operating System The operating system. Valid values include a range of Red Hat Enterprise Linux and Windows variants. Yes. Potentially changes the virtual hardware. Instance Type The instance type on which the virtual machine's hardware configuration can be based. This field is set to Custom by default, which means the virtual machine is not connected to an instance type. The other options available from this drop down menu are Large , Medium , Small , Tiny , XLarge , and any custom instance types that the Administrator has created. Other settings that have a chain link icon to them are pre-filled by the selected instance type. If one of these values is changed, the virtual machine will be detached from the instance type and the chain icon will appear broken. However, if the changed setting is restored to its original value, the virtual machine will be reattached to the instance type and the links in the chain icon will rejoin. NOTE: Support for instance types is now deprecated, and will be removed in a future release. Yes. Optimized for The type of system for which the virtual machine is to be optimized. There are three options: Server , Desktop , and High Performance ; by default, the field is set to Server . Virtual machines optimized to act as servers have no sound card, use a cloned disk image, and are not stateless. Virtual machines optimized to act as desktop machines do have a sound card, use an image (thin allocation), and are stateless. Virtual machines optimized for high performance have a number of configuration changes. See Configuring High Performance Virtual Machines Templates and Pools . Yes. Name The name of the virtual machine. The name must be a unique name within the data center and must not contain any spaces, and must contain at least one character from A-Z or 0-9. 
The maximum length of a virtual machine name is 255 characters. The name can be reused in different data centers in the environment. Yes. VM ID The virtual machine ID. The virtual machine's creator can set a custom ID for that virtual machine. The custom ID must contain only numbers, in the format, 00000000-0000-0000-0000-00000000 . If no ID is specified during creation a UUID will be automatically assigned. For both custom and automatically-generated IDs, changes are not possible after virtual machine creation. Yes. Description A meaningful description of the new virtual machine. No. Comment A field for adding plain text human-readable comments regarding the virtual machine. No. Affinity Labels Add or remove a selected Affinity Label . No. Stateless Select this check box to run the virtual machine in stateless mode. This mode is used primarily for desktop virtual machines. Running a stateless desktop or server creates a new COW layer on the virtual machine hard disk image where new and changed data is stored. Shutting down the stateless virtual machine deletes the new COW layer which includes all data and configuration changes, and returns the virtual machine to its original state. Stateless virtual machines are useful when creating machines that need to be used for a short time, or by temporary staff. Not applicable. Start in Pause Mode Select this check box to always start the virtual machine in pause mode. This option is suitable for virtual machines which require a long time to establish a SPICE connection; for example, virtual machines in remote locations. Not applicable. Delete Protection Select this check box to make it impossible to delete the virtual machine. It is only possible to delete the virtual machine if this check box is not selected. No. Sealed Select this check box to seal the created virtual machine. This option eliminates machine-specific settings from virtual machines that are provisioned from the template. For more information about the sealing process, see Sealing a Windows Virtual Machine for Deployment as a Template No. Instance Images Click Attach to attach a floating disk to the virtual machine, or click Create to add a new virtual disk. Use the plus and minus buttons to add or remove additional virtual disks. Click Edit to change the configuration of a virtual disk that has already been attached or created. No. Instantiate VM network interfaces by picking a vNIC profile. Add a network interface to the virtual machine by selecting a vNIC profile from the nic1 drop-down list. Use the plus and minus buttons to add or remove additional network interfaces. No. A.1.2. Virtual Machine System Settings Explained CPU Considerations For non-CPU-intensive workloads , you can run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). The following benefits can be achieved: You can run a greater number of virtual machines, which reduces hardware requirements. You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads. For best performance, and especially for CPU-intensive workloads , you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. 
When the host has hyperthreading enabled, QEMU treats the host's hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core. The following table details the options available on the System tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.2. Virtual Machine: System Settings Field Name Description Power cycle required? Memory Size The amount of memory assigned to the virtual machine. When allocating memory, consider the processing and storage needs of the applications that are intended to run on the virtual machine. If OS supports hotplugging, no. Otherwise, yes. Maximum Memory The maximum amount of memory that can be assigned to the virtual machine. Maximum guest memory is also constrained by the selected guest architecture and the cluster compatibility level. If OS supports hotplugging, no. Otherwise, yes. Total Virtual CPUs The processing power allocated to the virtual machine as CPU Cores. For high performance, do not assign more cores to a virtual machine than are present on the physical host. If OS supports hotplugging, no. Otherwise, yes. Virtual Sockets The number of CPU sockets for the virtual machine. Do not assign more sockets to a virtual machine than are present on the physical host. If OS supports hotplugging, no. Otherwise, yes. Cores per Virtual Socket The number of cores assigned to each virtual socket. If OS supports hotplugging, no. Otherwise, yes. Threads per Core The number of threads assigned to each core. Increasing the value enables simultaneous multi-threading (SMT). IBM POWER8 supports up to 8 threads per core. For x86 and x86_64 (Intel and AMD) CPU types, the recommended value is 1, unless you want to replicate the exact host topology, which you can do using CPU pinning. For more information, see Pinning CPU . If OS supports hotplugging, no. Otherwise, yes. Chipset/Firmware Type Specifies the chipset and firmware type. Defaults to the cluster's default chipset and firmware type. Options are: I440FX Chipset with BIOS Legacy BIOS Q35 Chipset with BIOS BIOS without UEFI (Default for clusters with compatibility version 4.4) Q35 Chipset with UEFI BIOS with UEFI (Default for clusters with compatibility version 4.7) Q35 Chipset with UEFI SecureBoot UEFI with SecureBoot, which authenticates the digital signatures of the boot loader For more information, see UEFI and the Q35 chipset in the Administration Guide . Yes. Custom Emulated Machine This option allows you to specify the machine type. If changed, the virtual machine will only run on hosts that support this machine type. Defaults to the cluster's default machine type. Yes. Custom CPU Type This option allows you to specify a CPU type. If changed, the virtual machine will only run on hosts that support this CPU type. Defaults to the cluster's default CPU type. Yes. Hardware Clock Time Offset This option sets the time zone offset of the guest hardware clock. For Windows, this should correspond to the time zone set in the guest. Most default Linux installations expect the hardware clock to be GMT+00:00. Yes. 
Custom Compatibility Version The compatibility version determines which features are supported by the cluster, as well as, the values of some properties and the emulated machine type. By default, the virtual machine is configured to run in the same compatibility mode as the cluster as the default is inherited from the cluster. In some situations the default compatibility mode needs to be changed. An example of this is if the cluster has been updated to a later compatibility version but the virtual machines have not been restarted. These virtual machines can be set to use a custom compatibility mode that is older than that of the cluster. See Changing the Cluster Compatibility Version in the Administration Guide for more information. Yes. Serial Number Policy Override the system-level and cluster-level policies for assigning a serial numbers to virtual machines. Apply a policy that is unique to this virtual machine: System Default : Use the system-wide defaults, which are configured in the Manager database using the engine configuration tool and the DefaultSerialNumberPolicy and DefaultCustomSerialNumber key names. The default value for DefaultSerialNumberPolicy is to use the Host ID. See Scheduling Policies in the Administration Guide for more information. Host ID : Set this virtual machine's serial number to the UUID of the host. Vm ID : Set this virtual machine's serial number to the UUID of this virtual machine. Custom serial number : Set this virtual machine's serial number to the value you specify in the following Custom Serial Number parameter. Yes. Custom Serial Number Specify the custom serial number to apply to this virtual machine. Yes. A.1.3. Virtual Machine Initial Run Settings Explained The following table details the options available on the Initial Run tab of the New Virtual Machine and Edit Virtual Machine windows. The settings in this table are only visible if the Use Cloud-Init/Sysprep check box is selected, and certain options are only visible when either a Linux-based or Windows-based option has been selected in the Operating System list in the General tab, as outlined below. Note This table does not include information on whether a power cycle is required because the settings apply to the virtual machine's initial run; the virtual machine is not running when you configure these settings. Table A.3. Virtual Machine: Initial Run Settings Field Name Operating System Description Use Cloud-Init/Sysprep Linux, Windows This check box toggles whether Cloud-Init or Sysprep will be used to initialize the virtual machine. VM Hostname Linux, Windows The host name of the virtual machine. Domain Windows The Active Directory domain to which the virtual machine belongs. Organization Name Windows The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time. Active Directory OU Windows The organizational unit in the Active Directory domain to which the virtual machine belongs. Configure Time Zone Linux, Windows The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Admin Password Windows The administrative user password for the virtual machine. Click the disclosure arrow to display the settings for this option. Use already configured password : This check box is automatically selected after you specify an initial administrative user password. 
You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password. Admin Password : The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password. Authentication Linux The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option. Use already configured password : This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password. Password : The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password. SSH Authorized Keys : SSH keys to be added to the authorized keys file of the virtual machine. You can specify multiple SSH keys by entering each SSH key on a new line. Regenerate SSH Keys : Regenerates SSH keys for the virtual machine. Custom Locale Windows Custom locale options for the virtual machine. Locales must be in a format such as en-US . Click the disclosure arrow to display the settings for this option. Input Locale : The locale for user input. UI Language : The language used for user interface elements such as buttons and menus. System Locale : The locale for the overall system. User Locale : The locale for users. Networks Linux Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option. DNS Servers : The DNS servers to be used by the virtual machine. DNS Search Domains : The DNS search domains to be used by the virtual machine. Network : Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click + , a set of fields becomes visible that can specify whether to use DHCP, and configure an IP address, netmask, and gateway, and specify whether the network interface will start on boot. Custom Script Linux Custom scripts that will be run on the virtual machine when it starts. The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation. Sysprep Windows A custom Sysprep definition. The definition must be in the format of a complete unattended installation answer file. You can copy and paste the default answer files in the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Virtualization Manager is installed and alter the fields as required. See Templates for more information. Ignition 2.3.0 Red Hat Enterprise Linux CoreOS When Red Hat Enterprise Linux CoreOS is selected as Operating System, this check box toggles whether Ignition will be used to initialize the virtual machine. A.1.4. Virtual Machine Console Settings Explained The following table details the options available on the Console tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.4. Virtual Machine: Console Settings Field Name Description Power cycle required? Graphical Console Section A group of settings. Yes. Headless Mode Select this check box if you do not a require a graphical console for the virtual machine. 
When selected, all other fields in the Graphical Console section are disabled. In the VM Portal, the Console icon in the virtual machine's details view is also disabled. Important See Configuring Headless Machines for more details and prerequisites for using headless mode. Yes. Video Type Defines the graphics device. QXL is the default and supports both graphic protocols. VGA supports only the VNC protocol. Yes. Graphics protocol Defines which display protocol to use. SPICE is the default protocol. VNC is an alternative option. To allow both protocols select SPICE + VNC . Yes. VNC Keyboard Layout Defines the keyboard layout for the virtual machine. This option is only available when using the VNC protocol. Yes. USB enabled Defines SPICE USB redirection. This check box is not selected by default. This option is only available for virtual machines using the SPICE protocol: Disabled (check box is cleared) - USB controller devices are added according to the devices.usb.controller value in the osinfo-defaults.properties configuration file. The default for all x86 and x86_64 operating systems is piix3-uhci . For ppc64 systems, the default is nec-xhci . Enabled (check box is selected) - Enables native KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual machines do not require any in-guest agents or drivers for native USB. Yes. Console Disconnect Action Defines what happens when the console is disconnected. This is only relevant with SPICE and VNC console connections. This setting can be changed while the virtual machine is running but will not take effect until a new console connection is established. Select either: No action - No action is taken. Lock screen - This is the default option. For all Linux machines and for Windows desktops this locks the currently active user session. For Windows servers, this locks the desktop and the currently active user. Logout user - For all Linux machines and Windows desktops, this logs out the currently active user session. For Windows servers, the desktop and the currently active user are logged out. Shutdown virtual machine - Initiates a graceful virtual machine shutdown. Reboot virtual machine - Initiates a graceful virtual machine reboot. No. Monitors The number of monitors for the virtual machine. This option is only available for virtual desktops using the SPICE display protocol. You can choose 1 , 2 or 4 . Note that multiple monitors are not supported for Windows systems with WDDMDoD drivers. Yes. Smartcard Enabled Smart cards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smart cards can be used to protect Red Hat Virtualization virtual machines. Select or clear the check box to activate and deactivate Smart card authentication for individual virtual machines. Yes. Single Sign On method Enabling Single Sign On allows users to sign into the guest operating system when connecting to a virtual machine from the VM Portal using the Guest Agent. Disable Single Sign On - Select this option if you do not want the Guest Agent to attempt to sign into the virtual machine. Use Guest Agent - Enables Single Sign On to allow the Guest Agent to sign you into the virtual machine. If you select Use Guest Agent, no. Otherwise, yes. Disable strict user checking Click the Advanced Parameters arrow and select the check box to use this option. With this option selected, the virtual machine does not need to be rebooted when a different user connects to it. 
By default, strict checking is enabled so that only one user can connect to the console of a virtual machine. No other user is able to open a console to the same virtual machine until it has been rebooted. The exception is that a SuperUser can connect at any time and replace a existing connection. When a SuperUser has connected, no normal user can connect again until the virtual machine is rebooted. Disable strict checking with caution, because you can expose the user's session to the new user. No. Soundcard Enabled A sound card device is not necessary for all virtual machine use cases. If it is for yours, enable a sound card here. Yes. Enable SPICE file transfer Defines whether a user is able to drag and drop files from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. No. Enable SPICE clipboard copy and paste Defines whether a user is able to copy and paste content from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. No. Serial Console Section A group of settings. Enable VirtIO serial console The VirtIO serial console is emulated through VirtIO channels, using SSH and key pairs, and allows you to access a virtual machine's serial console directly from a client machine's command line, instead of opening a console from the Administration Portal or the VM Portal. The serial console requires direct access to the Manager, since the Manager acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. Select the check box to enable the VirtIO console on the virtual machine. Requires a firewall rule. See Opening a Serial Console to a Virtual Machine . Yes. A.1.5. Virtual Machine Host Settings Explained The following table details the options available on the Host tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.5. Virtual Machine: Host Settings Field Name Sub-element Description Power cycle required? Start Running On Defines the preferred host on which the virtual machine is to run. Select either: Any Host in Cluster - The virtual machine can start and run on any available host in the cluster. Specific Host(s) - The virtual machine will start running on a particular host in the cluster. However, the Manager or an administrator can migrate the virtual machine to a different host in the cluster depending on the migration and high-availability settings of the virtual machine. Select the specific host or group of hosts from the list of available hosts. No. The virtual machine can migrate to that host while running. CPU options Pass-Through Host CPU When selected, allows virtual machines to use the host's CPU flags. When selected, Migration Options is set to Allow manual migration only . Yes Migrate only to hosts with the same TSC frequency When selected, this virtual machine can only be migrated to a host with the same TSC frequency. This option is only valid for High Performance virtual machines. Yes Migration Options Migration mode Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster's policy. 
Allow manual and automatic migration - The virtual machine can be automatically migrated from one host to another in accordance with the status of the environment, or manually by an administrator. Allow manual migration only - The virtual machine can only be migrated from one host to another manually by an administrator. Do not allow migration - The virtual machine cannot be migrated, either automatically or manually. No Migration policy Defines the migration convergence policy. If the check box is left unselected, the host determines the policy. Cluster default (Minimal downtime) - Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. Minimal downtime - Allows the virtual machine to migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. Post-copy migration - When used, post-copy migration pauses the migrating virtual machine vCPUs on the source host, transfers only a minimum of memory pages, activates the virtual machine vCPUs on the destination host, and transfers the remaining memory pages while the virtual machine is running on the destination. The post-copy policy first tries pre-copy to verify whether convergence can occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. This significantly reduces the downtime of the migrated virtual machine, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source virtual machine change. It is optimal for migrating virtual machines in heavy continuous use, which would not be possible to migrate with standard pre-copy migration. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. Warning If the network connection breaks prior to the completion of the post-copy process, the Manager pauses and then kills the running virtual machine. Do not use post-copy migration if the virtual machine availability is critical or if the migration network is unstable. Suspend workload if needed - Allows the virtual machine to migrate in most situations, including when the virtual machine is running a heavy workload. Because of this, virtual machines may experience a more significant downtime than with some other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. No Enable migration encryption Allows the virtual machine to be encrypted during migration. Cluster default Encrypt Don't encrypt No Parallel Migrations Allows you to specify whether and how many parallel migration connections to use. Cluster default : Parallel migration connections are determined by the cluster default. Disabled : The virtual machine is migrated using a single, non-parallel connection. Auto : The number of parallel connections is automatically determined. This settings might automatically disable parallel connections. Auto Parallel : The number of parallel connections is automatically determined. Custom : Allows you to specify the preferred number of parallel connections, the actual number may be lower. Number of VM Migration Connections This setting is only available when Custom is selected. 
The preferred number of custom parallel migrations, between 2 and 255. Configure NUMA NUMA Node Count The number of virtual NUMA nodes available in a host that can be assigned to the virtual machine. No NUMA Pinning Opens the NUMA Topology window. This window shows the host's total CPUs, memory, and NUMA nodes, and the virtual machine's virtual NUMA nodes. You can manually pin virtual NUMA nodes to host NUMA nodes by clicking and dragging each vNUMA from the box on the right to a NUMA node on the left. You can also set Tune Mode for memory allocation: Strict - Memory allocation will fail if the memory cannot be allocated on the target node. Preferred - Memory is allocated from a single preferred node. If sufficient memory is not available, memory can be allocated from other nodes. Interleave - Memory is allocated across nodes in a round-robin algorithm. If you define NUMA pinning, Migration Options is set to Allow manual migration only . Yes A.1.6. Virtual Machine High Availability Settings Explained The following table details the options available on the High Availability tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.6. Virtual Machine: High Availability Settings Field Name Description Power cycle required? Highly Available Select this check box if the virtual machine is to be highly available. For example, in cases of host maintenance, all virtual machines are automatically live migrated to another host. If the host crashes and is in a non-responsive state, only virtual machines with high availability are restarted on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically live migrated to another host. Note that this option is unavailable for virtual machines defined as Server or Desktop if the Migration Options setting in the Hosts tab is set to Do not allow migration . For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary. However, for virtual machines defined as High Performance , you can define high availability regardless of the Migration Options setting. Yes. Target Storage Domain for VM Lease Select the storage domain to hold a virtual machine lease, or select No VM Lease to disable the functionality. When a storage domain is selected, it will hold a virtual machine lease on a special volume that allows the virtual machine to be started on another host if the original host loses power or becomes unresponsive. This functionality is only available on storage domain V4 or later. Note If you define a lease, the only Resume Behavior available is KILL. Yes. Resume Behavior Defines the desired behavior of a virtual machine that is paused due to storage I/O errors, once a connection with the storage is reestablished. You can define the desired resume behavior even if the virtual machine is not highly available. The following options are available: AUTO_RESUME - The virtual machine is automatically resumed, without requiring user intervention. This is recommended for virtual machines that are not highly available and that do not require user intervention after being in the paused state. LEAVE_PAUSED - The virtual machine remains in pause mode until it is manually resumed or restarted. KILL - The virtual machine is automatically resumed if the I/O error is remedied within 80 seconds. However, if more than 80 seconds pass, the virtual machine is ungracefully shut down. 
This is recommended for highly available virtual machines, to allow the Manager to restart them on another host that is not experiencing the storage I/O error. KILL is the only option available when using virtual machine leases. No. Priority for Run/Migration queue Sets the priority level for the virtual machine to be migrated or restarted on another host. No. Watchdog Allows users to attach a watchdog card to a virtual machine. A watchdog is a timer that is used to automatically detect and recover from failures. Once set, a watchdog timer continually counts down to zero while the system is in operation, and is periodically restarted by the system to prevent it from reaching zero. If the timer reaches zero, it signifies that the system has been unable to reset the timer and is therefore experiencing a failure. Corrective actions are then taken to address the failure. This functionality is especially useful for servers that demand high availability. Watchdog Model : The model of watchdog card to assign to the virtual machine. At current, the only supported model is i6300esb . Watchdog Action : The action to take if the watchdog timer reaches zero. The following actions are available: none - No action is taken. However, the watchdog event is recorded in the audit log. reset - The virtual machine is reset and the Manager is notified of the reset action. poweroff - The virtual machine is immediately shut down. dump - A dump is performed and the virtual machine is paused. The guest's memory is dumped by libvirt, therefore, neither 'kdump' nor 'pvpanic' is required. The dump file is created in the directory that is configured by auto_dump_path in the /etc/libvirt/qemu.conf file on the host. pause - The virtual machine is paused, and can be resumed by users. Yes. A.1.7. Virtual Machine Resource Allocation Settings Explained The following table details the options available on the Resource Allocation tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.7. Virtual Machine: Resource Allocation Settings Field Name Sub-element Description Power cycle required? CPU Allocation CPU Profile The CPU profile assigned to the virtual machine. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are defined for a cluster, based on quality of service entries created for data centers. No. CPU Shares Allows users to set the level of CPU resources a virtual machine can demand relative to other virtual machines. Low - 512 Medium - 1024 High - 2048 Custom - A custom level of CPU shares defined by the user. No. CPU Pinning Policy None - Runs without any CPU pinning. Manual - Runs a manually specified virtual CPU on a specific physical CPU and a specific host. Available only when the virtual machine is pinned to a Host. Resize and Pin NUMA - Resizes the virtual CPU and NUMA topology of the virtual machine according to the Host, and pins them to the Host resources. Dedicated - Exclusively pins virtual CPUs to host physical CPUs. Available for cluster compatibility level 4.7 or later. If the virtual machine has NUMA enabled, all nodes must be unpinned. Isolate Threads - Exclusively pins virtual CPUs to host physical CPUs. Each virtual CPU gets a physical core. Available for cluster compatibility level 4.7 or later. If the virtual machine has NUMA enabled, all nodes must be unpinned. No. 
CPU Pinning topology Enables the virtual machine's virtual CPU (vCPU) to run on a specific physical CPU (pCPU) in a specific host. The syntax of CPU pinning is v#p[_v#p] , for example: 0#0 - Pins vCPU 0 to pCPU 0. 0#0_1#3 - Pins vCPU 0 to pCPU 0, and pins vCPU 1 to pCPU 3. 1#1-4,^2 - Pins vCPU 1 to one of the pCPUs in the range of 1 to 4, excluding pCPU 2. The CPU Pinning Topology is populated automatically when Resize and Pin NUMA pinning is selected in CPU Pinning Policy . In order to pin a virtual machine to a host, you must also select the following on the Host tab: Start Running On: Specific Pass-Through Host CPU If CPU pinning is set and you change Start Running On: Specific a CPU pinning topology will be lost window appears when you click OK . When defined, Migration Options in the Hosts tab is set to Allow manual migration only . Yes. Memory Allocation Physical Memory Guaranteed The amount of physical memory guaranteed for this virtual machine. Should be any number between 0 and the defined memory for this virtual machine. If lowered, yes. Otherwise, no. Memory Balloon Device Enabled Enables the memory balloon device for this virtual machine. Enable this setting to allow memory overcommitment in a cluster. Enable this setting for applications that allocate large amounts of memory suddenly but set the guaranteed memory to the same value as the defined memory.Use ballooning for applications and loads that slowly consume memory, occasionally release memory, or stay dormant for long periods of time, such as virtual desktops. See Optimization Settings Explained in the Administration Guide for more information. Yes. Trusted Platform Module TPM Device Enabled Enables the addition of an emulated Trusted Platform Module (TPM) device. Select this check box to add an emulated Trusted Platform Module device to a virtual machine. TPM devices can only be used on x86_64 machines with UEFI firmware and PowerPC machines with pSeries firmware installed. See Adding Trusted Platform Module devices for more information. Yes. IO Threads IO Threads Enabled Enables IO threads. Select this check box to improve the speed of disks that have a VirtIO interface by pinning them to a thread separate from the virtual machine's other functions. Improved disk performance increases a virtual machine's overall performance. Disks with VirtIO interfaces are pinned to an IO thread using a round-robin algorithm. Yes. Queues Multi Queues Enabled Enables multiple queues. This check box is selected by default. It creates up to four queues per vNIC, depending on how many vCPUs are available. It is possible to define a different number of queues per vNIC by creating a custom property as follows: engine-config -s "CustomDeviceProperties={type=interface;prop={ other-nic-properties ;queues=[1-9][0-9]*}}" where other-nic-properties is a semicolon-separated list of pre-existing NIC custom properties. Yes. VirtIO-SCSI Enabled Allows users to enable or disable the use of VirtIO-SCSI on the virtual machines. Not applicable. VirtIO-SCSI Multi Queues Enabled The VirtIO-SCSI Multi Queues Enabled option is only available when VirtIO-SCSI Enabled is selected. Select this check box to enable multiple queues in the VirtIO-SCSI driver. This setting can improve I/O throughput when multiple threads within the virtual machine access the virtual disks. It creates up to four queues per VirtIO-SCSI controller, depending on how many disks are connected to the controller and how many vCPUs are available. Not applicable. 
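As a hedged illustration of the per-vNIC queues custom property described above, the following sketch registers the property on the Manager machine and restarts the engine so the change is picked up. It is an assumption-based example only: it presumes no other NIC custom properties are already defined (otherwise keep them in the semicolon-separated list), and the value of 4 queues is purely illustrative.

# Register a per-vNIC "queues" custom property (run on the Manager machine)
engine-config -s "CustomDeviceProperties={type=interface;prop={queues=[1-9][0-9]*}}"
# engine-config changes take effect after the engine service restarts
systemctl restart ovirt-engine
# A value such as queues=4 can then be set as a custom property on the relevant vNIC profile
# in the Administration Portal.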
Storage Allocation The Storage Allocation option is only available when the virtual machine is created from a template. Not applicable. Thin Provides optimized usage of storage capacity. Disk space is allocated only as it is required. When selected, the format of the disks will be marked as QCOW2 and you will not be able to change it. Not applicable. Clone Optimized for the speed of guest read and write operations. All disk space requested in the template is allocated at the time of the clone operation. Possible disk formats are QCOW2 or Raw . Not applicable. Disk Allocation The Disk Allocation option is only available when you are creating a virtual machine from a template. Not applicable. Alias An alias for the virtual disk. By default, the alias is set to the same value as that of the template. Not applicable. Virtual Size The total amount of disk space that the virtual machine based on the template can use. This value cannot be edited, and is provided for reference only. Not applicable. Format The format of the virtual disk. The available options are QCOW2 and Raw . When Storage Allocation is Thin , the disk format is QCOW2 . When Storage Allocation is Clone , select QCOW2 or Raw . Not applicable. Target The storage domain on which the virtual disk is stored. By default, the storage domain is set to the same value as that of the template. Not applicable. Disk Profile The disk profile to assign to the virtual disk. Disk profiles are created based on storage profiles defined in the data centers. For more information, see Creating a Disk Profile . Not applicable. A.1.8. Virtual Machine Boot Options Settings Explained The following table details the options available on the Boot Options tab of the New Virtual Machine and Edit Virtual Machine windows Table A.8. Virtual Machine: Boot Options Settings Field Name Description Power cycle required? First Device After installing a new virtual machine, the new virtual machine must go into Boot mode before powering up. Select the first device that the virtual machine must try to boot: Hard Disk CD-ROM Network (PXE) Yes. Second Device Select the second device for the virtual machine to use to boot if the first device is not available. The first device selected in the option does not appear in the options. Yes. Attach CD If you have selected CD-ROM as a boot device, select this check box and select a CD-ROM image from the drop-down menu. The images must be available in the ISO domain. Yes. Enable menu to select boot device Enables a menu to select the boot device. After the virtual machine starts and connects to the console, but before the virtual machine starts booting, a menu displays that allows you to select the boot device. This option should be enabled before the initial boot to allow you to select the required installation media. Yes. A.1.9. Virtual Machine Random Generator Settings Explained The following table details the options available on the Random Generator tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.9. Virtual Machine: Random Generator Settings Field Name Description Power cycle required? Random Generator enabled Selecting this check box enables a paravirtualized Random Number Generator PCI device (virtio-rng). This device allows entropy to be passed from the host to the virtual machine in order to generate a more sophisticated random number. Note that this check box can only be selected if the RNG device exists on the host and is enabled in the host's cluster. Yes. 
Period duration (ms) Specifies the duration of the RNG's "full cycle" or "full period" in milliseconds. If omitted, the libvirt default of 1000 milliseconds (1 second) is used. If this field is filled, Bytes per period must be filled also. Yes. Bytes per period Specifies how many bytes are permitted to be consumed per period. Yes. Device source: The source of the random number generator. This is automatically selected depending on the source supported by the host's cluster. /dev/urandom source - The Linux-provided random number generator. /dev/hwrng source - An external hardware generator. Yes. A.1.10. Virtual Machine Custom Properties Settings Explained The following table details the options available on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.10. Virtual Machine Custom Properties Settings Field Name Description Recommendations and Limitations Power cycle required? sndbuf Enter the size of the buffer for sending the virtual machine's outgoing data over the socket. Default value is 0. - Yes hugepages Enter the huge page size in KB. Set the huge page size to the largest size supported by the pinned host. The recommended size for x86_64 is 1 GB. The virtual machine's huge page size must be the same size as the pinned host's huge page size. The virtual machine's memory size must fit into the selected size of the pinned host's free huge pages. The NUMA node size must be a multiple of the huge page's selected size. Yes vhost Disables vhost-net, which is the kernel-based virtio network driver on virtual network interface cards attached to the virtual machine. To disable vhost, the format for this property is LogicalNetworkName : false . This will explicitly start the virtual machine without the vhost-net setting on the virtual NIC attached to LogicalNetworkName . vhost-net provides better performance than virtio-net, and if it is present, it is enabled on all virtual machine NICs by default. Disabling this property makes it easier to isolate and diagnose performance issues, or to debug vhost-net errors; for example, if migration fails for virtual machines on which vhost does not exist. Yes sap_agent Enables SAP monitoring on the virtual machine. Set to true or false . - Yes viodiskcache Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching. In order to ensure data integrity in the event of a fault in storage, in the network, or in a host during migration, do not migrate virtual machines with viodiskcache enabled, unless virtual machine clustering or application-level clustering is also enabled. Yes scsi_hostdev Optionally, if you add a SCSI host device to a virtual machine, you can specify the optimal SCSI host device driver. For details, see Adding Host Devices to a Virtual Machine . scsi_generic : (Default) Enables the guest operating system to access OS-supported SCSI host devices attached to the host. Use this driver for SCSI media changers that require raw access, such as tape or CD changers. scsi_block : Similar to scsi_generic but better speed and reliability. Use for SCSI disk devices. If trim or discard for the underlying device is desired, and it's a hard disk, use this driver. scsi_hd : Provides performance with lowered overhead. Supports large numbers of devices. Uses the standard SCSI device naming scheme. Can be used with aio-native. Use this driver for high-performance SSDs. 
virtio_blk_pci : Provides the highest performance without the SCSI overhead. Supports identifying devices by their serial numbers. If you are not sure, try scsi_hd . Yes Warning Increasing the value of the sndbuf custom property results in increased occurrences of communication failure between hosts and unresponsive virtual machines. A.1.11. Virtual Machine Icon Settings Explained You can add custom icons to virtual machines and templates. Custom icons can help to differentiate virtual machines in the VM Portal. The following table details the options available on the Icon tab of the New Virtual Machine and Edit Virtual Machine windows. Note This table does not include information on whether a power cycle is required because these settings apply to the virtual machine's appearance in the Administration portal , not to its configuration. Table A.11. Virtual Machine: Icon Settings Button Name Description Upload Click this button to select a custom image to use as the virtual machine's icon. The following limitations apply: Supported formats: jpg, png, gif Maximum size: 24 KB Maximum dimensions: 150px width, 120px height Power cycle required? Use default A.1.12. Virtual Machine Foreman/Satellite Settings Explained The following table details the options available on the Foreman/Satellite tab of the New Virtual Machine and Edit Virtual Machine windows Table A.12. Virtual Machine:Foreman/Satellite Settings Field Name Description Power cycle required? Provider If the virtual machine is running Red Hat Enterprise Linux and the system is configured to work with a Satellite server, select the name of the Satellite from the list. This enables you to use Satellite's content management feature to display the relevant Errata for this virtual machine. See Configuring Satellite Errata for more details. Yes. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/appe-reference_settings_in_administration_portal_and_user_portal_windows |
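To make the custom properties in Table A.10 above more concrete, the following key=value pairs show how they might be entered on the Custom Properties tab. The values are illustrative assumptions only (the ovirtmgmt logical network name and the 1 GB huge page size are examples, not defaults), and each property still carries the recommendations and limitations listed in the table.

hugepages=1048576          # 1 GB huge pages expressed in KB; must match the pinned host's huge page size
vhost=ovirtmgmt:false      # disable vhost-net on the vNIC attached to the ovirtmgmt logical network
viodiskcache=writethrough  # virtio disk cache mode; avoid migrating while this is enabled
sap_agent=true             # enable SAP monitoring on the virtual machine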
D.3. Selection-Based Action Menus | D.3. Selection-Based Action Menus Selecting specific objects in the Model Explorer provides a context from which Teiid Designer presents a customized menu of available actions. Selecting a view model, for instance, results in a number of high-level options to manage and edit model content and to perform various operations, and provides quick access to other important actions available in Teiid Designer. These may include specialized actions based on the model type. Figure D.5. Sample Context Menu | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/selection-based_action_menus |
5.9. Smart Card Authentication | 5.9. Smart Card Authentication Smart cards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smart cards can be used to protect Red Hat Virtualization virtual machines. Enabling Smart Cards Ensure that the smart card hardware is plugged into the client machine and is installed according to manufacturer's directions. Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Console tab and select the Smartcard enabled check box. Click OK . Connect to the running virtual machine by clicking the Console button. Smart card authentication is now passed from the client hardware to the virtual machine. Important If the Smart card hardware is not correctly installed, enabling the Smart card feature will result in the virtual machine failing to load properly. Disabling Smart Cards Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Console tab, and clear the Smartcard enabled check box. Click OK . Configuring Client Systems for Smart Card Sharing Smart cards may require certain libraries in order to access their certificates. These libraries must be visible to the NSS library, which spice-gtk uses to provide the smart card to the guest. NSS expects the libraries to provide the PKCS #11 interface. Make sure that the module architecture matches the spice-gtk / remote-viewer architecture. For instance, if you have only the 32b PKCS #11 library available, you must install the 32b build of virt-viewer in order for smart cards to work. Configuring RHEL Clients for Smart Card support Red Hat Enterprise Linux provides support for Smart cards. Install the Smart card support group. If the Smart Card Support group is installed on a Red Hat Enterprise Linux system, smart cards are redirected to the guest when Smart Cards are enabled. To install the Smart card support group, run the following command: # dnf groupinstall "Smart card support" Configuring RHEL Clients with Other Smart Card Middleware Red Hat Enterprise Linux provides a system-wide registry of pkcs11 modules in the p11-kit , and these are accessible to all applications. To register the third party PKCS#11 library in the p11-kit database, run the following command as root: To verify the Smart card is visible for p11-kit through this library run the following command: Configuring Windows Clients Red Hat does not provide PKCS #11 support to Windows clients. Libraries that provide PKCS #11 support must be obtained from third parties. When such libraries are obtained, register them by running the following command as a user with elevated privileges: modutil -dbdir %PROGRAMDATA%\pki\nssdb -add " module name " -libfile C:_\Path\to\module_.dll | [
"dnf groupinstall \"Smart card support\"",
"echo \"module: /path/to/library.so\" > /etc/pkcs11/modules/my.module",
"p11-kit list-modules",
"modutil -dbdir %PROGRAMDATA%\\pki\\nssdb -add \" module name \" -libfile C:_\\Path\\to\\module_.dll"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/enabling_and_disabling_smartcards |
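As a worked example of the third-party middleware registration described in "Configuring RHEL Clients with Other Smart Card Middleware" above, the following sketch registers the OpenSC PKCS #11 module on a Red Hat Enterprise Linux client and verifies that p11-kit can see it. OpenSC and its library path are assumptions for illustration; substitute the library shipped with your middleware, and remember that the module architecture must match the remote-viewer architecture.

# Register the OpenSC PKCS #11 module system-wide (usual path on x86_64)
echo "module: /usr/lib64/opensc-pkcs11.so" > /etc/pkcs11/modules/opensc.module
# Confirm that p11-kit lists the new module and any inserted token
p11-kit list-modules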
Chapter 4. Configuring Capsule Server with External Services | Chapter 4. Configuring Capsule Server with External Services If you do not want to configure the DNS, DHCP, and TFTP services on Capsule Server, use this section to configure your Capsule Server to work with external DNS, DHCP and TFTP services. 4.1. Configuring Capsule Server with External DNS You can configure Capsule Server with external DNS. Capsule Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Capsule Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 4.2. Configuring Capsule Server with External DHCP To configure Capsule Server with external DHCP, you must complete the following procedures: Section 4.2.1, "Configuring an External DHCP Server to Use with Capsule Server" Section 4.2.2, "Configuring Satellite Server with an External DHCP Server" 4.2.1. Configuring an External DHCP Server to Use with Capsule Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Capsule Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) or its utility packages. You must also share the DHCP configuration and lease files with Capsule Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, clients fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and BIND packages or its utility packages depending on your host version. For Red Hat Enterprise Linux 7 host: For Red Hat Enterprise Linux 8 host: Generate a security token: As a result, a key pair that consists of two files is created in the current directory. Copy the secret hash from the key: Edit the dhcpd configuration file for all subnets and add the key. The following is an example: Note that the option routers value is the Satellite or Capsule IP address that you want to use with an external DHCP service. Delete the two key files from the directory that they were created in. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a step: To ensure that the configuration files are accessible, restore the read and execute flags: Start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. 4.2.2. Configuring Satellite Server with an External DHCP Server You can configure Capsule Server with an external DHCP server. Prerequisite Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Capsule Server. For more information, see Section 4.2.1, "Configuring an External DHCP Server to Use with Capsule Server" . Procedure Install the nfs-utils utility: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the DHCP service with the appropriate subnets and domain. 4.3. Configuring Capsule Server with External TFTP You can configure Capsule Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 4.4. Configuring Capsule Server with External IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage using the IdM server. 
Capsule Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. For more information about Red Hat Identity Management, see the Linux Domain Identity, Authentication, and Policy Guide . To configure Capsule Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 4.4.1, "Configuring Dynamic DNS Update with GSS-TSIG Authentication" Section 4.4.2, "Configuring Dynamic DNS Update with TSIG Authentication" To revert to internal DNS service, use the following procedure: Section 4.4.3, "Reverting to Internal DNS Service" Note You are not required to use Capsule Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Capsule Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see External Authentication for Provisioned Hosts in the Administering Red Hat Satellite guide. 4.4.1. Configuring Dynamic DNS Update with GSS-TSIG Authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Capsule Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos Principal on the IdM Server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Capsule Server to use to authenticate on the IdM server. Installing and Configuring the IdM Client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS Zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. 
Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that Manages the DNS Service for the Domain Use the satellite-installer command to configure the Satellite or Capsule that manages the DNS Service for the domain: On Satellite, enter the following command: On Capsule, enter the following command: After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the Configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Capsule Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 4.4.2. Configuring Dynamic DNS Update with TSIG Authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling External Updates to the DNS Zone in the IdM Server On the IdM Server, add the following to the top of the /etc/named.conf file: Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. 
Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing External Updates to the DNS Zone in the IdM Server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 4.4.3. Reverting to Internal DNS Service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS Server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information,see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the Configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. | [
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"yum install dhcp bind",
"yum install dhcp-server bind-utils",
"dnssec-keygen -a HMAC-MD5 -b 512 -n HOST omapi_key",
"grep ^Key Komapi_key.+*.private | cut -d ' ' -f2",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm HMAC-MD5; secret \"jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw==\"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp && firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl start dhcpd",
"yum install nfs-utils systemctl enable rpcbind nfs-server systemctl start rpcbind nfs-server nfs-lock nfs-idmapd",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp firewall-cmd --runtime-to-permanent",
"firewall-cmd --zone public --add-service mountd && firewall-cmd --zone public --add-service rpc-bind && firewall-cmd --zone public --add-service nfs && firewall-cmd --runtime-to-permanent",
"yum install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash bash-4.2USD cat /mnt/nfs/etc/dhcp/dhcpd.conf bash-4.2USD cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases bash-4.2USD exit",
"satellite-installer --foreman-proxy-dhcp=true --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret=jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw== --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911 --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-server= DHCP_Server_FQDN",
"systemctl restart foreman-proxy",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp=true --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule/047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --scenario satellite --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab",
"satellite-installer --scenario capsule --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --scenario satellite --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-keyfile=/etc/rndc.key --foreman-proxy-dns-ttl=86400",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_capsule_server/configuring-external-services |
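Relating to the dnsmasq note in Section 4.2.1 above: dhcp-no-override is a plain dnsmasq configuration directive, so enabling it is a one-line change. The file location below assumes a stock dnsmasq installation; this is a sketch, not part of the Satellite installer workflow.

# /etc/dnsmasq.conf (excerpt): keep the boot server and filename in their dedicated DHCP fields
dhcp-no-override
# apply the change
systemctl restart dnsmasq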
probe::tty.resize | probe::tty.resize Name probe::tty.resize - Called when a terminal resize happens Synopsis tty.resize Values new_row the new row value old_row the old row value name the tty name new_col the new col value old_xpixel the old xpixel value old_col the old col value new_xpixel the new xpixel value old_ypixel the old ypixel value new_ypixel the new ypixel value | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-resize |
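Because the reference entry above only lists the probe point and its variables, the following one-line SystemTap script is a minimal sketch of how they might be used. It assumes that systemtap and the matching kernel debuginfo are installed on the host.

# Print every terminal resize as it happens
stap -e 'probe tty.resize { printf("%s: %dx%d -> %dx%d\n", name, old_col, old_row, new_col, new_row) }'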
Appendix C. Disk Encryption | Appendix C. Disk Encryption C.1. What is Block Device Encryption? Block device encryption protects the data on a block device by encrypting it. To access the device's decrypted contents, a user must provide a passphrase or key as authentication. This provides additional security beyond existing OS security mechanisms in that it protects the device's contents even if it has been physically removed from the system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Disk_Encryption_Guide |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_ansible_automation_platform/providing-feedback |
Chapter 5. Configuring Capsule Servers with default SSL certificates for load balancing (with Puppet) | Chapter 5. Configuring Capsule Servers with default SSL certificates for load balancing (with Puppet) If you use Puppet in your Satellite setup, you can configure one or more Capsule Servers that use default SSL certificates for load balancing. To do this, you configure Puppet certificate signing on one of your Capsule Servers. Then, you configure each remaining Puppet Capsule used for load balancing to use the certificates. The first Capsule Server will generate and sign Puppet certificates for the remaining Capsules configured for load balancing. 5.1. Prerequisites Prepare a new Capsule Server to use for load balancing. See Chapter 2, Preparing Capsule Servers for load balancing . Review Section 1.2, "Services and features supported in a load-balanced setup" . 5.2. Configuring Capsule Server with default SSL certificates to generate and sign Puppet certificates On the Capsule Server that will generate Puppet certificates for all other load-balancing Capsule Servers, configure Puppet certificate generation and signing. Procedure On Satellite Server, generate Katello certificates for the system where you configure Capsule Server to generate and sign Puppet certificates: Retain a copy of the example satellite-installer command that is output by the capsule-certs-generate command for installing Capsule Server certificate. Copy the certificate archive file from Satellite Server to Capsule Server: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command: On Capsule Server that is the Puppetserver Certificate Authority, stop the Puppet server: Generate Puppet certificates for all other Capsule Servers that you configure for load balancing, except the system where you first configured Puppet certificate signing: This command creates the following files: /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppetserver/ca/signed/ capsule.example.com .pem Start the Puppet server: 5.3. Configuring remaining Capsule Servers with default SSL certificates for load balancing On each load-balancing Capsule Server, excluding the Capsule Server configured to sign Puppet certificates, configure the system to use Puppet certificates. Procedure On Satellite Server, generate Katello certificates for Capsule Server: Retain a copy of the example satellite-installer command that is output by the capsule-certs-generate command for installing Capsule Server certificate. Copy the certificate archive file from Satellite Server to Capsule Server: On Capsule Server, install the puppetserver package: On Capsule Server, create directories for puppet certificates: On Capsule Server, copy the Puppet certificates for this Capsule Server from the system where you configure Capsule Server to sign Puppet certificates: On Capsule Server, change the /etc/puppetlabs/puppet/ssl/ directory ownership to user puppet and group puppet : On Capsule Server, set the SELinux context for the /etc/puppetlabs/puppet/ssl/ directory: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command: 5.4. 
Managing Puppet limitations with load balancing in Satellite If you use Puppet, Puppet certificate signing is assigned to the first Capsule that you configure. If the first Capsule is down, hosts cannot obtain Puppet content. Puppet Certificate Authority (CA) management does not support certificate signing in a load-balanced setup. Puppet CA stores certificate information, such as the serial number counter and CRL, on the file system. Multiple writer processes that attempt to use the same data can corrupt it. To manage this Puppet limitation, complete the following steps: Configure Puppet certificate signing on one Capsule Server, typically the first system where you configure Capsule Server for load balancing. Configure the clients to send CA requests to port 8141 on a load balancer. Configure a load balancer to redirect CA requests from port 8141 to port 8140 on the system where you configure Capsule Server to sign Puppet certificates. To troubleshoot issues, reproduce the issue on each Capsule, bypassing the load balancer. This solution does not use Pacemaker or other similar HA tools to maintain one state across all Capsules. | [
"capsule-certs-generate --certs-tar \"/root/ capsule-ca.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule-ca.example.com",
"scp /root/ capsule-ca.example.com -certs.tar root@ capsule-ca.example.com : capsule-ca.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"true\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"true\"",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule-ca.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --enable-puppet --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-puppetca \"true\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule-ca.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server true --puppet-server-ca \"true\"",
"systemctl stop puppetserver",
"puppetserver ca generate --ca-client --certname capsule.example.com --subject-alt-names loadbalancer.example.com",
"systemctl start puppetserver",
"capsule-certs-generate --certs-tar \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com --foreman-proxy-fqdn capsule.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"satellite-maintain packages install puppetserver",
"mkdir -p /etc/puppetlabs/puppet/ssl/certs/ /etc/puppetlabs/puppet/ssl/private_keys/ /etc/puppetlabs/puppet/ssl/public_keys/",
"scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem",
"chown -R puppet:puppet /etc/puppetlabs/puppet/ssl/",
"restorecon -Rv /etc/puppetlabs/puppet/ssl/",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-puppetca \"false\" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\"",
"satellite-installer --scenario capsule --certs-cname \" loadbalancer.example.com \" --certs-tar-file \" capsule.example.com-certs.tar \" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --foreman-proxy-puppetca \"false\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-server-ca \"false\""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/configuring-capsule-servers-with-default-ssl-certificates-for-load-balancing-with-puppet_load-balancing |
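The port redirection and client configuration described in Section 5.4 can be illustrated with the following sketch. HAProxy is an assumption here (any TCP load balancer works), the snippets are excerpts rather than complete files, and the host names follow the examples used earlier in this chapter.

# /etc/haproxy/haproxy.cfg (excerpt on the load balancer): forward Puppet CA traffic received
# on port 8141 to port 8140 on the one Capsule that signs Puppet certificates
frontend puppet-ca
    bind *:8141
    mode tcp
    default_backend puppet-ca-signing

backend puppet-ca-signing
    mode tcp
    server capsule-ca capsule-ca.example.com:8140 check

# /etc/puppetlabs/puppet/puppet.conf (excerpt on each Puppet client): send normal agent traffic
# to the load balancer and CA requests to port 8141
[main]
server = loadbalancer.example.com
ca_server = loadbalancer.example.com
ca_port = 8141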
Chapter 153. XSLT Saxon | Chapter 153. XSLT Saxon Since Camel 3.0 Only producer is supported The XSLT Saxon component allows you to process a message using an XSLT template using Saxon. This is ideal when using Templating to generate responses for requests. 153.1. Dependencies When using xslt-saxon with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xslt-saxon-starter</artifactId> </dependency> 153.2. URI format The URI format contains templateName , which can be one of the following: the classpath-local URI of the template to invoke the complete URL of the remote template. You can append query options to the URI in the following format: Table 153.1. Example URIs URI Description xslt-saxon:com/acme/mytransform.xsl Refers to the file com/acme/mytransform.xsl on the classpath xslt-saxon:file:///foo/bar.xsl Refers to the file /foo/bar.xsl xslt-saxon:http://acme.com/cheese/foo.xsl Refers to the remote http resource 153.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 153.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 153.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 153.4. Component Options The XSLT Saxon component supports 11 options, which are listed below. Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean saxonConfiguration (advanced) To use a custom Saxon configuration. Configuration saxonConfigurationProperties (advanced) To set custom Saxon configuration properties. Map saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String secureProcessing (advanced) Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true boolean transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. URIResolver uriResolverFactory (advanced) To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. XsltUriResolverFactory 153.5. Endpoint Options The XSLT Saxon endpoint is configured using URI syntax: with the following path and query parameters: 153.5.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the template. The following is supported by the default URIResolver. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 153.5.2. Query Parameters (18 parameters) Name Description Default Type allowStAX (producer) Whether to allow using StAX as the javax.xml.transform.Source. You can enable this if the XSLT library supports StAX such as the Saxon library (camel-saxon). The Xalan library (default in JVM) does not support StAXSource. true boolean contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean deleteOutputFile (producer) If you have output=file then this option dictates whether or not the output file should be deleted when the Exchange is done processing. 
For example, suppose the output file is a temporary file, then it can be a good idea to delete it after use. false boolean failOnNullBody (producer) Whether or not to throw an exception if the input body is null. true boolean output (producer) Option to specify which output type to use. Possible values are: string, bytes, DOM, file. The first three options are all in-memory based, whereas file is streamed directly to a java.io.File. For file you must specify the filename in the IN header with the key XsltConstants.XSLT_FILE_NAME which is also CamelXsltFileName. Also any paths leading to the filename must be created beforehand, otherwise an exception is thrown at runtime. Enum values: string bytes DOM file string XsltOutput transformerCacheSize (producer) The number of javax.xml.transform.Transformer objects that are cached for reuse to avoid calls to Template.newTransformer(). 0 int lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean entityResolver (advanced) To use a custom org.xml.sax.EntityResolver with javax.xml.transform.sax.SAXSource. EntityResolver errorListener (advanced) Allows you to configure a custom javax.xml.transform.ErrorListener. Beware when doing this then the default error listener, which captures any errors or fatal errors and stores information on the Exchange as properties, is not in use. So only use this option for special use-cases. ErrorListener resultHandlerFactory (advanced) Allows you to use a custom org.apache.camel.builder.xml.ResultHandlerFactory which is capable of using custom org.apache.camel.builder.xml.ResultHandler types. ResultHandlerFactory saxonConfiguration (advanced) To use a custom Saxon configuration. Configuration saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String secureProcessing (advanced) Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true boolean transformerFactory (advanced) To use a custom XSLT transformer factory. TransformerFactory transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom javax.xml.transform.URIResolver. URIResolver xsltMessageLogger (advanced) A consumer for messages generated during XSLT transformations. XsltMessageLogger 153.6. Using XSLT endpoints The following format is an example of using an XSLT template to formulate a response for a message for InOut message exchanges (where there is a JMSReplyTo header). 
from("activemq:My.Queue"). to("xslt-saxon:com/acme/mytransform.xsl"); If you want to use InOnly and consume the message and send it to another destination you could use the following route: from("activemq:My.Queue"). to("xslt-saxon:com/acme/mytransform.xsl"). to("activemq:Another.Queue"); 153.7. Getting Useable Parameters into the XSLT By default, all headers are added as parameters which are then available in the XSLT. To make the parameters useable, you will need to declare them. <setHeader name="myParam"><constant>42</constant></setHeader> <to uri="xslt:MyTransform.xsl"/> The parameter also needs to be declared in the top level of the XSLT for it to be available: <xsl: ...... > <xsl:param name="myParam"/> <xsl:template ...> 153.8. Spring XML versions To use the above examples in Spring XML, use something like the following code: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="activemq:My.Queue"/> <to uri="xslt-saxon:org/apache/camel/spring/processor/example.xsl"/> <to uri="activemq:Another.Queue"/> </route> </camelContext> 153.9. Using xsl:include Camel provides its own implementation of URIResolver . This allows Camel to load included files from the classpath. For example the include file in the following code will be located relative to the starting endpoint. <xsl:include href="staff_template.xsl"/> This means that Camel will locate the file in the classpath as org/apache/camel/component/xslt/staff_template.xsl . You can use classpath: or file: to instruct Camel to look either in the classpath or file system. If you omit the prefix then Camel uses the prefix from the endpoint configuration. If no prefix is specified in the endpoint configuration, the default is classpath: . You can also refer backwards in the include paths. In the following example, the xsl file will be resolved under org/apache/camel/component . <xsl:include href="../staff_other_template.xsl"/> 153.10. Using xsl:include and default prefix Camel uses the prefix from the endpoint configuration as the default prefix. You can explicitly specify file: or classpath: loading. The two loading types can be mixed in a XSLT script, if necessary. 153.11. Using Saxon extension functions Since Saxon 9.2, writing extension functions has been supplemented by a new mechanism, referred to as integrated extension functions . You can now easily use camel as shown in the below example: SimpleRegistry registry = new SimpleRegistry(); registry.put("function1", new MyExtensionFunction1()); registry.put("function2", new MyExtensionFunction2()); CamelContext context = new DefaultCamelContext(registry); context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start") .to("xslt-saxon:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2"); } }); With Spring XML: <bean id="function1" class="org.apache.camel.component.xslt.extensions.MyExtensionFunction1"/> <bean id="function2" class="org.apache.camel.component.xslt.extensions.MyExtensionFunction2"/> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:extensions"/> <to uri="xslt-saxon:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2"/> </route> </camelContext> 153.12. Dynamic stylesheets To provide a dynamic stylesheet at runtime you can define a dynamic URI. See How to use a dynamic URI in to() for more information. 153.13. 
Accessing warnings, errors and fatalErrors from XSLT ErrorListener Any warning/error or fatalError is stored on the current Exchange as a property with the keys Exchange.XSLT_ERROR , Exchange.XSLT_FATAL_ERROR , or Exchange.XSLT_WARNING which allows end users to get hold of any errors happening during transformation. For example, in the stylesheet below, we want to determine whether a staff member has an empty dob field, and to include a custom error message using xsl:message. <xsl:template match="/"> <html> <body> <xsl:for-each select="staff/programmer"> <p>Name: <xsl:value-of select="name"/><br /> <xsl:if test="dob=''"> <xsl:message terminate="yes">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template> The exception is stored on the Exchange as a warning with the key Exchange.XSLT_WARNING. 153.14. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.xslt-saxon.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.xslt-saxon.content-cache Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true Boolean camel.component.xslt-saxon.enabled Whether to enable auto configuration of the xslt-saxon component. This is enabled by default. Boolean camel.component.xslt-saxon.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.xslt-saxon.saxon-configuration To use a custom Saxon configuration. The option is a net.sf.saxon.Configuration type. Configuration camel.component.xslt-saxon.saxon-configuration-properties To set custom Saxon configuration properties. Map camel.component.xslt-saxon.saxon-extension-functions Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String camel.component.xslt-saxon.secure-processing Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true Boolean camel.component.xslt-saxon.transformer-factory-class To use a custom XSLT transformer factory, specified as a FQN class name. 
String camel.component.xslt-saxon.transformer-factory-configuration-strategy A configuration strategy to apply on freshly created instances of TransformerFactory. The option is a org.apache.camel.component.xslt.TransformerFactoryConfigurationStrategy type. TransformerFactoryConfigurationStrategy camel.component.xslt-saxon.uri-resolver To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. The option is a javax.xml.transform.URIResolver type. URIResolver camel.component.xslt-saxon.uri-resolver-factory To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. The option is a org.apache.camel.component.xslt.XsltUriResolverFactory type. XsltUriResolverFactory | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xslt-saxon-starter</artifactId> </dependency>",
"xslt-saxon:templateName[?options]",
"?option=value&option=value&...",
"xslt-saxon:resourceUri",
"from(\"activemq:My.Queue\"). to(\"xslt-saxon:com/acme/mytransform.xsl\");",
"from(\"activemq:My.Queue\"). to(\"xslt-saxon:com/acme/mytransform.xsl\"). to(\"activemq:Another.Queue\");",
"<setHeader name=\"myParam\"><constant>42</constant></setHeader> <to uri=\"xslt:MyTransform.xsl\"/>",
"<xsl: ...... > <xsl:param name=\"myParam\"/> <xsl:template ...>",
"<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"activemq:My.Queue\"/> <to uri=\"xslt-saxon:org/apache/camel/spring/processor/example.xsl\"/> <to uri=\"activemq:Another.Queue\"/> </route> </camelContext>",
"<xsl:include href=\"staff_template.xsl\"/>",
"<xsl:include href=\"../staff_other_template.xsl\"/>",
"SimpleRegistry registry = new SimpleRegistry(); registry.put(\"function1\", new MyExtensionFunction1()); registry.put(\"function2\", new MyExtensionFunction2()); CamelContext context = new DefaultCamelContext(registry); context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\") .to(\"xslt-saxon:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2\"); } });",
"<bean id=\"function1\" class=\"org.apache.camel.component.xslt.extensions.MyExtensionFunction1\"/> <bean id=\"function2\" class=\"org.apache.camel.component.xslt.extensions.MyExtensionFunction2\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:extensions\"/> <to uri=\"xslt-saxon:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2\"/> </route> </camelContext>",
"<xsl:template match=\"/\"> <html> <body> <xsl:for-each select=\"staff/programmer\"> <p>Name: <xsl:value-of select=\"name\"/><br /> <xsl:if test=\"dob=''\"> <xsl:message terminate=\"yes\">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-xslt-saxon-component-starter |
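As a small illustration of the Spring Boot auto-configuration table in the XSLT Saxon chapter above, the fragment below is a minimal sketch, not taken from the chapter itself, of how two of the listed camel.component.xslt-saxon.* options could be expressed in an application.yml file; the property names come from the table, while the file name and the chosen values are only illustrative assumptions:

  # application.yml - illustrative sketch only; property names match the auto-configuration table above
  camel:
    component:
      xslt-saxon:
        content-cache: false     # reload the stylesheet on each message, which the table notes is good for development
        secure-processing: true  # keep XML secure processing enabled, the documented default

With Spring Boot's relaxed binding these keys correspond to the camel.component.xslt-saxon.content-cache and camel.component.xslt-saxon.secure-processing options described in the table, so the same settings could equally be written in application.properties form.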
Getting started with Ansible Automation Platform | Getting started with Ansible Automation Platform Red Hat Ansible Automation Platform 2.5 Get started with Ansible Automation Platform Red Hat Customer Content Services | [
"/api/gateway/v1/activitystream/",
"You have already reached the maximum number of 1 hosts allowed for your organization. Contact your System Administrator for assistance.",
"- name: Set Up a Project and Job Template hosts: host.name.ip become: true tasks: - name: Create a Project ansible.controller.project: name: Job Template Test Project state: present scm_type: git scm_url: https://github.com/ansible/ansible-tower-samples.git - name: Create a Job Template ansible.controller.job_template: name: my-job-1 project: Job Template Test Project inventory: Demo Inventory playbook: hello_world.yml job_type: run state: present",
"ansible-galaxy role init <role_name>",
"ansible-galaxy role init my_role",
"~/.ansible/collections/ansible_collections/<my_namespace>/<my_collection_name> └── roles/ └── my_role/ ├── .travis.yml ├── README.md ├── defaults/ │ └── main.yml ├── files/ ├── handlers/ │ └── main.yml ├── meta/ │ └── main.yml ├── tasks/ │ └── main.yml ├── templates/ ├── tests/ │ ├── inventory │ └── test.yml └── vars/ └── main.yml",
"ansible-galaxy role init my_role --role-skeleton ~/role_skeleton",
"ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET",
"podman login registry.redhat.io",
"USDpodman pull registry.redhat.io/aap/<image name>",
"- name: Listen for storage-monitor events hosts: all sources: - ansible.eda.webhook: host: 0.0.0.0 port: 5000 rules: - name: Rule - Print event information condition: event.meta.headers is defined action: run_job_template: name: StorageRemediation organization: Default job_args: extra_vars: message: from eda sleep: 1",
"kind: Route apiVersion: route.openshift.io/v1 metadata: name: test-sync-bug namespace: dynatrace labels: app: eda job-name: activation-job-1-5000 spec: host: test-sync-bug-dynatrace.apps.aap-dt.ocp4.testing.ansible.com to: kind: Service name: activation-job-1-5000 weight: 100 port: targetPort: 5000 tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None",
"curl -H \"Content-Type: application/json\" -X POST test-sync-bug-dynatrace.apps.aap-dt.ocp4.testing.ansible.com -d '{}'",
"- name: My first play hosts: myhosts tasks: - name: Ping my hosts ansible.builtin.ping: - name: Print message ansible.builtin.debug: msg: Hello world",
"ansible-playbook -i inventory.ini playbook.yaml",
"PLAY [My first play] ******************************************************** TASK [Gathering Facts] ****************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Ping my hosts] ******************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Print message] ******************************************************** ok: [192.0.2.50] => { \"msg\": \"Hello world\" } ok: [192.0.2.51] => { \"msg\": \"Hello world\" } ok: [192.0.2.52] => { \"msg\": \"Hello world\" } PLAY RECAP ****************************************************************** 192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"ansible-galaxy role init <role_name>",
"ansible-galaxy role init my_role",
"~/.ansible/collections/ansible_collections/<my_namespace>/<my_collection_name> └── roles/ └── my_role/ ├── .travis.yml ├── README.md ├── defaults/ │ └── main.yml ├── files/ ├── handlers/ │ └── main.yml ├── meta/ │ └── main.yml ├── tasks/ │ └── main.yml ├── templates/ ├── tests/ │ ├── inventory │ └── test.yml └── vars/ └── main.yml",
"ansible-galaxy role init my_role --role-skeleton ~/role_skeleton",
"ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET",
"podman login registry.redhat.io",
"USDpodman pull registry.redhat.io/aap/<image name>"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/getting_started_with_ansible_automation_platform/index |
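To connect the role and playbook examples from the getting-started chapter above, the following is a minimal sketch of a playbook that applies a role created with ansible-galaxy role init my_role to the myhosts group defined in the chapter's inventory.ini; the file name site.yml and the use of become are assumptions added for illustration and do not come from the chapter:

  # site.yml - hypothetical playbook that applies the my_role skeleton created earlier
  - name: Apply my_role to the lab hosts
    hosts: myhosts
    become: true
    roles:
      - my_role

It would be run in the same way as the chapter's first playbook, for example with ansible-playbook -i inventory.ini site.yml.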
Chapter 60. Kernel | Chapter 60. Kernel kexec fails when secondary cores do not offline Under certain circumstances, secondary-core offlining fails on AppliedMicro X-Gene platforms like HP ProLiant m400 and AppliedMicro Mustang. As a consequence, the kernel sometimes fails to trigger the kdump crash dump mechanism through kexec when a kernel panic occurs. As a result, the kernel crash dump file is not saved. (BZ#1218374) File-system corruption due to incorrect flushing of cache has been fixed but I/O operations can be slower Due to a bug in the megaraid_sas driver, file-system corruption previously occurred in some cases when the file system was used with a disk-write back cache during system shutdown, reboot, or power loss. This update fixes megaraid_sas to transfer the flush cache commands correctly to the raid card. As a result, if you also update the raid card firmware, the file-system corruption no longer occurs under the described circumstances. With Broadcom megaraid_sas raid adapter, you can check the functionality in the system log (dmesg). The proper functionality is indicated by the following text string: Note that this fix can slow down I/O operations because the cache is now flushed properly. (BZ#1380447) Wacom Cintiq 12WX is not redetected when unplugged and plugged in quickly When unplugging and quickly plugging in Wacom Cintiq 12WX within the same USB port, the tablet is currently not recognized. To work around this problem, wait 3-5 seconds before plugging the tablet back in. (BZ#1458354) Installing to some IBM POWER8 machines using a Virtual DVD fails when starting GUI Red Hat Enterprise Linux 7.4 can fail to install on some IBM POWER8 hardware (including S822LC machines) while starting the Anaconda GUI. The problem is characterized by errors starting X11, followed by a Pane is dead message in the Anaconda screen. The workaround is to append inst.text to the kernel command line and install in text mode. This issue is confined to Virtual DVD installations, additional testing with the netboot image allows GUI installation. (BZ#1377857) Entering full screen mode using a keyboard shortcut causes display problems on VMWare ESXi 5.5 When using Red Hat Enterprise Linux 7.4 as a virtual machine guest running on a VMWare ESXi 5.5 host, pressing Ctrl+Alt+Enter to enter full screen mode in the console causes the display to become unusable. At the same time, errors such as the following example are saved into the system log ( dmesg ): To work around this problem, shut down the virtual machine, open its .vmx configuration file, and add or modify the following parameters: In the above, replace X and Y with the horizontal and vertical resolution of your screen. The svga.vramSize parameter takes a value that is equal to X times Y times 4. An example setup for a screen with a resolution of 1920x1080 therefore is: Note that VMWare ESXi 5.5 is the only version reported to encounter this bug; other versions can enter full screen mode without problems. (BZ#1451242) KSC currently does not support xz compression The Kernel module Source Checker (the ksc tool) is currently unable to process the xz compression method and reports the following error: Until this limitation is resolved, system administrators should manually uncompress any third party modules using xz compression, before running the ksc tool. (BZ#1463600) | [
"FW supports sync cache Yes",
"[drm:vmw_cmdbuf_work_func [vmwgfx]] *ERROR* Command buffer error.",
"svga.maxWidth = X svga.maxHeight = Y svga.vramSize = \"X * Y * 4\"",
"svga.maxWidth = 1920 svga.maxHeight = 1080 svga.vramSize = \"8294400\"",
"Invalid architecture, supported architectures are x86_64, ppc64, s390x"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_kernel |
7.233. spice-protocol | 7.233. spice-protocol 7.233.1. RHBA-2013:0510 - spice-protocol bug fix and enhancement update Updated spice-protocol packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The spice-protocol packages provide header files to describe the SPICE protocol and the QXL para-virtualized graphics card. The SPICE protocol is needed to build newer versions of the spice-client and the spice-server packages. Note The spice-protocol package has been upgraded to upstream version 0.12.2, which provides a number of enhancements over the previous version, including support for USB redirection. (BZ# 842352 ) Enhancement BZ# 846910 This update adds support for seamless migration to the spice-protocol packages. All users who build spice packages are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/spice-protocol
Chapter 6. Restoring IdM servers using Ansible playbooks | Chapter 6. Restoring IdM servers using Ansible playbooks Using the ipabackup Ansible role, you can automate restoring an IdM server from a backup and transferring backup files between servers and your Ansible controller. 6.1. Preparing your Ansible control node for managing IdM As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. By following this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient , ipabackup , ipasmartcard_server and ipasmartcard_client ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: You must enter the IdM admin password when you enter these commands. Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory 6.2. Using Ansible to restore an IdM server from a backup stored on the server You can use an Ansible playbook to restore an IdM server from a backup stored on that host. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the LDAP Directory Manager password. 
Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the restore-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the restore-my-server.yml Ansible playbook file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to restore. Set the ipabackup_password variable to the LDAP Directory Manager password. Save the file. Run the Ansible playbook specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.3. Using Ansible to restore an IdM server from a backup stored on your Ansible controller You can use an Ansible playbook to restore an IdM server from a backup stored on your Ansible controller. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the LDAP Directory Manager password. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the restore-server-from-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the restore-my-server-from-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to restore. Set the ipabackup_password variable to the LDAP Directory Manager password. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.4. Using Ansible to copy a backup of an IdM server to your Ansible controller You can use an Ansible playbook to copy a backup of an IdM server from the IdM server to your Ansible controller. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. 
Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-my-backup-from-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your IdM server to copy to your Ansible controller. By default, backups are stored in the present working directory of the Ansible controller. To specify the directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To copy all IdM backups to your controller, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the copy-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Verification Verify your backup is in the /home/user/ipabackups directory on your Ansible controller: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.5. Using Ansible to copy a backup of an IdM server from your Ansible controller to the IdM server You can use an Ansible playbook to copy a backup of an IdM server from your Ansible controller to the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-my-backup-from-my-controller-to-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your Ansible controller to copy to the IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.6. Using Ansible to remove a backup from an IdM server You can use an Ansible playbook to remove a backup from an IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . 
The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the remove-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the remove-backup-from-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to remove from your IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To remove all IdM backups from the IdM server, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the remove-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. | [
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/restore-server.yml restore-my-server.yml",
"--- - name: Playbook to restore an IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 ipabackup_password: <your_LDAP_DM_password> roles: - role: ipabackup state: restored",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory restore-my-server.yml",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/restore-server-from-controller.yml restore-my-server-from-my-controller.yml",
"--- - name: Playbook to restore IPA server from controller hosts: ipaserver become: true vars: ipabackup_name: server.idm.example.com_ipa-full-2021-04-30-13-12-00 ipabackup_password: <your_LDAP_DM_password> ipabackup_from_controller: true roles: - role: ipabackup state: restored",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory restore-my-server-from-my-controller.yml",
"mkdir ~/ipabackups",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-server.yml copy-backup-from-my-server-to-my-controller.yml",
"--- - name: Playbook to copy backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 ipabackup_to_controller: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-server-to-my-controller.yml",
"vars: ipabackup_name: all ipabackup_to_controller: true",
"[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-controller.yml copy-backup-from-my-controller-to-my-server.yml",
"--- - name: Playbook to copy a backup from controller to the IPA server hosts: ipaserver become: true vars: ipabackup_name: server.idm.example.com_ipa-full-2021-04-30-13-12-00 ipabackup_from_controller: true roles: - role: ipabackup state: copied",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-controller-to-my-server.yml",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/remove-backup-from-server.yml remove-backup-from-my-server.yml",
"--- - name: Playbook to remove backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 roles: - role: ipabackup state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory remove-backup-from-my-server.yml",
"vars: ipabackup_name: all"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/performing_disaster_recovery_with_identity_management/assembly_restoring-idm-servers-using-ansible-playbooks_performing-disaster-recovery |
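Every restore in the chapter above starts from an existing backup, so a companion sketch may be useful: the hypothetical backup-my-server.yml below uses the same ipabackup role to create a fresh backup on the IdM server before any of the restore playbooks are run. It assumes that the role creates a new backup when state: present is set without an ipabackup_name; that assumption and the file name are not taken from this chapter:

  ---
  # backup-my-server.yml - hypothetical; assumes state: present without ipabackup_name creates a new backup
  - name: Playbook to back up an IPA server
    hosts: ipaserver
    become: true
    roles:
      - role: ipabackup
        state: present

It would be run like the chapter's other examples, for instance ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server.yml.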
Updating clusters | Updating clusters OpenShift Container Platform 4.14 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team | [
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.10.22 Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.11 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) Recommended updates: VERSION IMAGE 4.10.26 quay.io/openshift-release-dev/ocp-release@sha256:e1fa1f513068082d97d78be643c369398b0e6820afab708d26acda2262940954 4.10.25 quay.io/openshift-release-dev/ocp-release@sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc 4.10.24 quay.io/openshift-release-dev/ocp-release@sha256:aab51636460b5a9757b736a29bc92ada6e6e6282e46b06e6fd483063d590d62a 4.10.23 quay.io/openshift-release-dev/ocp-release@sha256:e40e49d722cb36a95fa1c03002942b967ccbd7d68de10e003f0baa69abad457b Supported but not recommended updates: Version: 4.11.0 Image: quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 Recommended: False Reason: RPMOSTreeTimeout Message: Nodes with substantial numbers of containers and CPU contention may not reconcile machine configuration https://bugzilla.redhat.com/show_bug.cgi?id=2111817#c22",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"oc adm upgrade channel <channel>",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb",
"Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)",
"Cluster update time = 60 + (6 x 5) = 90 minutes",
"Cluster update time = 60 + (3 x 5) = 75 minutes",
"oc get apirequestcounts",
"NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H csistoragecapacities.v1.storage.k8s.io 14 380 csistoragecapacities.v1beta1.storage.k8s.io 1.27 0 16 custompolicydefinitions.v1beta1.capabilities.3scale.net 8 158 customresourcedefinitions.v1.apiextensions.k8s.io 1407 30148",
"oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'",
"1.27 csistoragecapacities.v1beta1.storage.k8s.io 1.29 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 1.29 prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io",
"oc get apirequestcounts <resource>.<version>.<group> -o yaml",
"oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io -o yaml",
"oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT",
"VERBS USERNAME USERAGENT list watch system:kube-controller-manager cluster-policy-controller/v0.0.0 list watch system:kube-controller-manager kube-controller-manager/v1.26.5+0abcdef list watch system:kube-scheduler kube-scheduler/v1.26.5+0abcdef",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.13-kube-1.27-api-removals-in-4.14\":\"true\"}}' --type=merge",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc adm upgrade",
"Recommended updates: VERSION IMAGE 4.14.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"RELEASE_IMAGE=<update_pull_spec>",
"quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1",
"oc create namespace <component_namespace>",
"RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ccoctl aws create-all \\ 1 --name=<name> \\ 2 --region=<aws_region> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> \\ 5 --create-private-s3-bucket 6",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"ccoctl azure create-managed-identities --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" \\ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 5 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\" 6",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"oc edit cloudcredential cluster",
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>",
"RUN depmod -b /opt USD{KERNEL_VERSION}",
"quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863",
"apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm upgrade",
"Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd",
"oc adm upgrade channel <channel>",
"oc adm upgrade channel stable-4.14",
"oc adm upgrade --to-latest=true 1",
"oc adm upgrade --to=<version> 1",
"oc adm upgrade",
"oc adm upgrade",
"Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss.",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.27.3 ip-10-0-170-223.ec2.internal Ready master 82m v1.27.3 ip-10-0-179-95.ec2.internal Ready worker 70m v1.27.3 ip-10-0-182-134.ec2.internal Ready worker 70m v1.27.3 ip-10-0-211-16.ec2.internal Ready master 82m v1.27.3 ip-10-0-250-100.ec2.internal Ready worker 69m v1.27.3",
"oc adm upgrade --include-not-recommended",
"oc adm upgrade --allow-not-recommended --to <version> <.>",
"oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge",
"clusterversion.config.openshift.io/version patched",
"spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+2.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False",
"oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes",
"ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>=",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=",
"node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3",
"oc create -f <file_name>",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: \"\"",
"oc create -f machineConfigPool.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf created",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-b node-role.kubernetes.io/worker-perf=''",
"oc label node worker-c node-role.kubernetes.io/worker-perf=''",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"oc create -f new-machineconfig.yaml",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: \"\"",
"oc create -f machineConfigPool-Canary.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5",
"systemctl status kdump.service",
"NAME STATUS ROLES AGE VERSION kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS)",
"cat /proc/cmdline",
"crashkernel=512M",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary-",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc get machineconfigpools",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>-",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-",
"node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m",
"oc delete mcp <mcp_name>",
"--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"",
"[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml",
"systemctl disable --now firewalld.service",
"subscription-manager repos --disable=rhocp-4.13-for-rhel-8-x86_64-rpms --enable=rhocp-4.14-for-rhel-8-x86_64-rpms",
"yum swap ansible ansible-core",
"yum update openshift-ansible openshift-clients",
"subscription-manager repos --disable=rhocp-4.13-for-rhel-8-x86_64-rpms --enable=rhocp-4.14-for-rhel-8-x86_64-rpms",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1",
"oc get node",
"NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.27.3 mycluster-control-plane-1 Ready master 145m v1.27.3 mycluster-control-plane-2 Ready master 145m v1.27.3 mycluster-rhel8-0 Ready worker 98m v1.27.3 mycluster-rhel8-1 Ready worker 98m v1.27.3 mycluster-rhel8-2 Ready worker 98m v1.27.3 mycluster-rhel8-3 Ready worker 98m v1.27.3",
"yum update",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"mkdir -p <directory_name>",
"cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"export OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1",
"oc create -f <filename>.yaml",
"oc create -f update-service-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service",
"oc -n openshift-update-service create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"",
"oc create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-subscription.yaml",
"oc -n openshift-update-service get clusterserviceversions",
"NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded",
"FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]",
"podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest",
"podman push registry.example.com/openshift/graph-data:latest",
"NAMESPACE=openshift-update-service",
"NAME=service",
"RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images",
"GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest",
"oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF",
"while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done",
"while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done",
"NAMESPACE=openshift-update-service",
"NAME=service",
"POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"",
"PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"",
"oc patch clusterversion version -p USDPATCH --type merge",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}",
"sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d",
"oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>",
"skopeo copy docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml",
"oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry",
"oc apply -f imageContentSourcePolicy.yaml",
"oc get ImageContentSourcePolicy -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}",
"oc get updateservice -n openshift-update-service",
"NAME AGE service 6s",
"oc delete updateservice service -n openshift-update-service",
"updateservice.updateservice.operator.openshift.io \"service\" deleted",
"oc project openshift-update-service",
"Now using project \"openshift-update-service\" on server \"https://example.com:6443\".",
"oc get operatorgroup",
"NAME AGE openshift-update-service-fprx2 4m41s",
"oc delete operatorgroup openshift-update-service-fprx2",
"operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted",
"oc get subscription",
"NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1",
"oc get subscription update-service-operator -o yaml | grep \" currentCSV\"",
"currentCSV: update-service-operator.v0.0.1",
"oc delete subscription update-service-operator",
"subscription.operators.coreos.com \"update-service-operator\" deleted",
"oc delete clusterserviceversion update-service-operator.v0.0.1",
"clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.27.3 control-plane-node-1 Ready master 75m v1.27.3 control-plane-node-2 Ready master 75m v1.27.3",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.27.3 compute-node-1 Ready worker 30m v1.27.3 compute-node-2 Ready worker 30m v1.27.3",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>",
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml",
"oc apply -f ./99-worker-bootupctl-update.yaml",
"oc describe clusterversions/version",
"Desired: Channels: candidate-4.13 candidate-4.14 fast-4.13 fast-4.14 stable-4.13 Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 URL: https://access.redhat.com/errata/RHSA-2023:6130 Version: 4.13.19 History: Completion Time: 2023-11-07T20:26:04Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 Started Time: 2023-11-07T19:11:36Z State: Completed Verified: true Version: 4.13.19 Completion Time: 2023-10-04T18:53:29Z Image: quay.io/openshift-release-dev/ocp-release@sha256:eac141144d2ecd6cf27d24efe9209358ba516da22becc5f0abc199d25a9cfcec Started Time: 2023-10-04T17:26:31Z State: Completed Verified: true Version: 4.13.13 Completion Time: 2023-09-26T14:21:43Z Image: quay.io/openshift-release-dev/ocp-release@sha256:371328736411972e9640a9b24a07be0af16880863e1c1ab8b013f9984b4ef727 Started Time: 2023-09-26T14:02:33Z State: Completed Verified: false Version: 4.13.12 Observed Generation: 4 Version Hash: CMLl3sLq-EA= Events: <none>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/updating_clusters/index |