title | content | commands | url
---|---|---|---|
24.2. L1 Cache Configuration | 24.2. L1 Cache Configuration 24.2.1. L1 Cache Configuration (Library Mode) The following sample configuration shows the L1 cache default values in Red Hat JBoss Data Grid's Library Mode. Example 24.1. L1 Cache Configuration in Library Mode The l1 element configures the cache behavior in distributed cache instances. If used with non-distributed caches, this element is ignored. The enabled parameter enables the L1 cache. The lifespan parameter sets the maximum life span of an entry when it is placed in the L1 cache. 24.2.2. L1 Cache Configuration (Remote Client-Server Mode) The following sample configuration shows the L1 cache default values in Red Hat JBoss Data Grid's Remote Client-Server mode. Example 24.2. L1 Cache Configuration for Remote Client-Server Mode The l1-lifespan element is added to a distributed-cache element to enable L1 caching and to set the life span of the L1 cache entries for that cache. This element is valid only for distributed caches. If l1-lifespan is set to 0 or a negative number ( -1 ), L1 caching is disabled. L1 caching is enabled when the l1-lifespan value is greater than 0 . Note When the cache is accessed remotely via the Hot Rod protocol, the client accesses the owner node directly, so using the L1 cache in this situation offers no performance improvement and is not recommended. Other remote clients (Memcached, REST) cannot target the owner, so using the L1 cache may improve performance at the cost of higher memory consumption. Note In Remote Client-Server mode, the L1 cache was previously enabled by default when a distributed cache was used, even if the l1-lifespan attribute was not set; the default lifespan value was 10 minutes. Since JBoss Data Grid 6.3, the default lifespan is 0 , which disables the L1 cache. Set a non-zero value for the l1-lifespan parameter to enable the L1 cache. | [
"<clustering mode=\"dist\"> <sync/> <l1 enabled=\"true\" lifespan=\"60000\" /> </clustering>",
"<distributed-cache l1-lifespan=\"${VALUE}\"> <!-- Additional configuration information here --> </distributed-cache>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-l1_cache_configuration |
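For reference, a concrete variant of the Remote Client-Server snippet above might look like the following sketch; the cache name ( default ) and the 60000 millisecond lifespan are illustrative values chosen for the example, not defaults taken from this guide:

<distributed-cache name="default" l1-lifespan="60000">
    <!-- L1 entries held on non-owner nodes expire after 60 seconds -->
</distributed-cache>

Because l1-lifespan is greater than 0 , L1 caching is enabled for this cache.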
Chapter 6. PriorityClass [scheduling.k8s.io/v1] | Chapter 6. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string preemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. value integer value represents the integer value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 6.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/scheduling.k8s.io/v1/priorityclasses HTTP method DELETE Description delete collection of PriorityClass Table 6.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.2. 
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 6.3. HTTP responses HTTP code Response body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body PriorityClass schema Table 6.6. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 6.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 6.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 6.8. Global path parameters Parameter Type Description name string name of the PriorityClass HTTP method DELETE Description delete a PriorityClass Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 6.11. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. Body parameters Parameter Type Description body PriorityClass schema Table 6.16. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 6.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 6.17. Global path parameters Parameter Type Description name string name of the PriorityClass HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/priorityclass-scheduling-k8s-io-v1 |
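As an illustration of the fields described in the specification above, a minimal PriorityClass manifest might look like the following sketch; the name high-priority and the value 1000000 are hypothetical examples, not values taken from the API reference:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "Use this priority class for critical workloads only."

Pods reference the class by naming it in their pod spec (for example, priorityClassName: high-priority), and they then receive the integer value shown above as their priority.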
Chapter 6. Configuring Debezium connectors for your application | Chapter 6. Configuring Debezium connectors for your application When the default Debezium connector behavior is not right for your application, you can use the following Debezium features to configure the behavior you need. Kafka Connect automatic topic creation Enables Connect to create topics at runtime, and apply configuration settings to those topics based on their names. Avro serialization Support for configuring Debezium PostgreSQL, MongoDB, or SQL Server connectors to use Avro to serialize message keys and value, making it easier for change event record consumers to adapt to a changing record schema. Configuring notifications to report connector status Provides a mechanism to expose status information about a connector through a configurable set of channels. CloudEvents converter Enables a Debezium connector to emit change event records that conform to the CloudEvents specification. Sending signals to a Debezium connector Provides a way to modify the behavior of a connector, or trigger an action, such as initiating an ad hoc snapshot. 6.1. Customization of Kafka Connect automatic topic creation Kafka provides two mechanisms for creating topics automatically. You can enable automatic topic creation for the Kafka broker, and, beginning with Kafka 2.6.0, you can also enable Kafka Connect to create topics. The Kafka broker uses the auto.create.topics.enable property to control automatic topic creation. In Kafka Connect, the topic.creation.enable property specifies whether Kafka Connect is permitted to create topics. In both cases, the default settings for the properties enables automatic topic creation. When automatic topic creation is enabled, if a Debezium source connector emits a change event record for a table for which no target topic already exists, the topic is created at runtime as the event record is ingested into Kafka. Differences between automatic topic creation at the broker and in Kafka Connect Topics that the broker creates are limited to sharing a single default configuration. The broker cannot apply unique configurations to different topics or sets of topics. By contrast, Kafka Connect can apply any of several configurations when creating topics, setting the replication factor, number of partitions, and other topic-specific settings as specified in the Debezium connector configuration. The connector configuration defines a set of topic creation groups, and associates a set of topic configuration properties with each group. The broker configuration and the Kafka Connect configuration are independent of each other. Kafka Connect can create topics regardless of whether you disable topic creation at the broker. If you enable automatic topic creation at both the broker and in Kafka Connect, the Connect configuration takes precedence, and the broker creates topics only if none of the settings in the Kafka Connect configuration apply. 
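A minimal sketch of the two settings side by side (the file split shown here is illustrative; only the property names come from this chapter):

# Kafka broker configuration - disable broker-side automatic topic creation
auto.create.topics.enable=false

# Kafka Connect worker configuration - let Connect create topics using connector-defined settings
topic.creation.enable=true

With this combination, topics for Debezium change event records are created by Kafka Connect according to the topic creation groups described in the sections that follow, not by the broker.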
See the following topics for more information: Section 6.1.1, "Disabling automatic topic creation for the Kafka broker" Section 6.1.2, "Configuring automatic topic creation in Kafka Connect" Section 6.1.3, "Configuration of automatically created topics" Section 6.1.3.1, "Topic creation groups" Section 6.1.3.2, "Topic creation group configuration properties" Section 6.1.3.3, "Specifying the configuration for the Debezium default topic creation group" Section 6.1.3.4, "Specifying the configuration for Debezium custom topic creation groups" Section 6.1.3.5, "Registering Debezium custom topic creation groups" 6.1.1. Disabling automatic topic creation for the Kafka broker By default, the Kafka broker configuration enables the broker to create topics at runtime if the topics do not already exist. Topics created by the broker cannot be configured with custom properties. If you use a Kafka version earlier than 2.6.0, and you want to create topics with specific configurations, you must to disable automatic topic creation at the broker, and then explicitly create the topics, either manually, or through a custom deployment process. Procedure In the broker configuration, set the value of auto.create.topics.enable to false . 6.1.2. Configuring automatic topic creation in Kafka Connect Automatic topic creation in Kafka Connect is controlled by the topic.creation.enable property. The default value for the property is true , enabling automatic topic creation, as shown in the following example: topic.creation.enable = true The setting for the topic.creation.enable property applies to all workers in the Connect cluster. Kafka Connect automatic topic creation requires you to define the configuration properties that Kafka Connect applies when creating topics. You specify topic configuration properties in the Debezium connector configuration by defining topic groups, and then specifying the properties to apply to each group. The connector configuration defines a default topic creation group, and, optionally, one or more custom topic creation groups. Custom topic creation groups use lists of topic name patterns to specify the topics to which the group's settings apply. For details about how Kafka Connect matches topics to topic creation groups, see Topic creation groups . For more information about how configuration properties are assigned to groups, see Topic creation group configuration properties . By default, topics that Kafka Connect creates are named based on the pattern server.schema.table , for example, dbserver.myschema.inventory . Procedure To prevent Kafka Connect from creating topics automatically, set the value of topic.creation.enable to false in the Kafka Connect custom resource, as in the following example: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect-cluster ... spec: config: topic.creation.enable: "false" Note Kafka Connect automatic topic creation requires the replication.factor and partitions properties to be set for at least the default topic creation group. It is valid for groups to obtain the values for the required properties from the default values for the Kafka broker. 6.1.3. Configuration of automatically created topics For Kafka Connect to create topics automatically, it requires information from the source connector about the configuration properties to apply when creating topics. You define the properties that control topic creation in the configuration for each Debezium connector. 
As Kafka Connect creates topics for event records that a connector emits, the resulting topics obtain their configuration from the applicable group. The configuration applies to event records emitted by that connector only. 6.1.3.1. Topic creation groups A set of topic properties is associated with a topic creation group. Minimally, you must define a default topic creation group and specify its configuration properties. Beyond that you can optionally define one or more custom topic creation groups and specify unique properties for each. When you create custom topic creation groups, you define the member topics for each group based on topic name patterns. You can specify naming patterns that describe the topics to include or exclude from each group. The include and exclude properties contain comma-separated lists of regular expressions that define topic name patterns. For example, if you want a group to include all topics that start with the string dbserver1.inventory , set the value of its topic.creation.inventory.include property to dbserver1\\.inventory\\.* . Note If you specify both include and exclude properties for a custom topic group, the exclusion rules take precedence, and override the inclusion rules. 6.1.3.2. Topic creation group configuration properties The default topic creation group and each custom group is associated with a unique set of configuration properties. You can configure a group to include any of the Kafka topic-level configuration properties . For example, you can specify the cleanup policy for old topic segments , retention time , or the topic compression type for a topic group. You must define at least a minimum set of properties to describe the configuration of the topics to be created. If no custom groups are registered, or if the include patterns for any registered groups don't match the names of any topics to be created, then Kafka Connect uses the configuration of the default group to create topics. For general information about configuring topics, see Kafka topic creation recommendations in Installing Debezium on OpenShift. 6.1.3.3. Specifying the configuration for the Debezium default topic creation group Before you can use Kafka Connect automatic topic creation, you must create a default topic creation group and define a configuration for it. The configuration for the default topic creation group is applied to any topics with names that do not match the include list pattern of a custom topic creation group. Prerequisites In the Kafka Connect custom resource, the use-connector-resources value in metadata.annotations specifies that the cluster Operator uses KafkaConnector custom resources to configure connectors in the cluster. For example: ... metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" ... Procedure To define properties for the topic.creation.default group, add them to spec.config in the connector custom resource, as shown in the following example: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector labels: strimzi.io/cluster: my-connect-cluster spec: ... config: ... topic.creation.default.replication.factor: 3 1 topic.creation.default.partitions: 10 2 topic.creation.default.cleanup.policy: compact 3 topic.creation.default.compression.type: lz4 4 ... You can include any Kafka topic-level configuration property in the configuration for the default group. Table 6.1. 
Connector configuration for the default topic creation group Item Description 1 topic.creation.default.replication.factor defines the replication factor for topics created by the default group. replication.factor is mandatory for the default group but optional for custom groups. Custom groups will fall back to the default group's value if not set. Use -1 to use the Kafka broker's default value. 2 topic.creation.default.partitions defines the number of partitions for topics created by the default group. partitions is mandatory for the default group but optional for custom groups. Custom groups will fall back to the default group's value if not set. Use -1 to use the Kafka broker's default value. 3 topic.creation.default.cleanup.policy is mapped to the cleanup.policy property of the topic level configuration parameters and defines the log retention policy. 4 topic.creation.default.compression.type is mapped to the compression.type property of the topic level configuration parameters and defines how messages are compressed on hard disk. Note Custom groups fall back to the default group settings only for the required replication.factor and partitions properties. If the configuration for a custom topic group leaves other properties undefined, the values specified in the default group are not applied. 6.1.3.4. Specifying the configuration for Debezium custom topic creation groups You can define multiple custom topic groups, each with its own configuration. Procedure To define a custom topic group, add a topic.creation. <group_name> .include property to spec.config in the connector custom resource, followed by the configuration properties that you want to apply to topics in the custom group. The following example shows an excerpt of a custom resource that defines the custom topic creation groups inventory and applicationlogs : apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector ... spec: ... config: ... 1 topic.creation.inventory.include: dbserver1\\.inventory\\.* 2 topic.creation.inventory.partitions: 20 topic.creation.inventory.cleanup.policy: compact topic.creation.inventory.delete.retention.ms: 7776000000 3 topic.creation.applicationlogs.include: dbserver1\\.logs\\.applog-.* 4 topic.creation.applicationlogs.exclude": dbserver1\\.logs\\.applog-old-.* 5 topic.creation.applicationlogs.replication.factor: 1 topic.creation.applicationlogs.partitions: 20 topic.creation.applicationlogs.cleanup.policy: delete topic.creation.applicationlogs.retention.ms: 7776000000 topic.creation.applicationlogs.compression.type: lz4 ... ... Table 6.2. Connector configuration for custom inventory and applicationlogs topic creation groups Item Description 1 Defines the configuration for the inventory group. The replication.factor and partitions properties are optional for custom groups. If no value is set, custom groups fall back to the value set for the default group. Set the value to -1 to use the value that is set for the Kafka broker. 2 topic.creation.inventory.include defines a regular expression to match all topics that start with dbserver1.inventory. . The configuration that is defined for the inventory group is applied only to topics with names that match the specified regular expression. 3 Defines the configuration for the applicationlogs group. The replication.factor and partitions properties are optional for custom groups. If no value is set, custom groups fall back to the value set for the default group. 
Set the value to -1 to use the value that is set for the Kafka broker. 4 topic.creation.applicationlogs.include defines a regular expression to match all topics that start with dbserver1.logs.applog- . The configuration that is defined for the applicationlogs group is applied only to topics with names that match the specified regular expression. Because an exclude property is also defined for this group, the topics that match the include regular expression might be further restricted by the that exclude property. 5 topic.creation.applicationlogs.exclude defines a regular expression to match all topics that start with dbserver1.logs.applog-old- . The configuration that is defined for the applicationlogs group is applied only to topics with name that do not match the given regular expression. Because an include property is also defined for this group, the configuration of the applicationlogs group is applied only to topics with names that match the specified include regular expressions and that do not match the specified exclude regular expressions. 6.1.3.5. Registering Debezium custom topic creation groups After you specify the configuration for any custom topic creation groups, register the groups. Procedure Register custom groups by adding the topic.creation.groups property to the connector custom resource, and specifying a comma-separated list of custom topic creation groups. The following excerpt from a connector custom resource registers the custom topic creation groups inventory and applicationlogs : apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector ... spec: ... config: topic.creation.groups: inventory,applicationlogs ... Completed configuration The following example shows a completed configuration that includes the configuration for a default topic group, along with the configurations for an inventory and an applicationlogs custom topic creation group: Example: Configuration for a default topic creation group and two custom groups apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector ... spec: ... config: ... topic.creation.default.replication.factor: 3, topic.creation.default.partitions: 10, topic.creation.default.cleanup.policy: compact topic.creation.default.compression.type: lz4 topic.creation.groups: inventory,applicationlogs topic.creation.inventory.include: dbserver1\\.inventory\\.* topic.creation.inventory.partitions: 20 topic.creation.inventory.cleanup.policy: compact topic.creation.inventory.delete.retention.ms: 7776000000 topic.creation.applicationlogs.include: dbserver1\\.logs\\.applog-.* topic.creation.applicationlogs.exclude": dbserver1\\.logs\\.applog-old-.* topic.creation.applicationlogs.replication.factor: 1 topic.creation.applicationlogs.partitions: 20 topic.creation.applicationlogs.cleanup.policy: delete topic.creation.applicationlogs.retention.ms: 7776000000 topic.creation.applicationlogs.compression.type: lz4 ... 6.2. Configuring Debezium connectors to use Avro serialization A Debezium connector works in the Kafka Connect framework to capture each row-level change in a database by generating a change event record. For each change event record, the Debezium connector completes the following actions: Applies configured transformations. Serializes the record key and value into a binary form by using the configured Kafka Connect converters . Writes the record to the correct Kafka topic. You can specify converters for each individual Debezium connector instance. 
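For example, a per-connector override might look like the following sketch (JSON connector configuration; JsonConverter is the standard Kafka Connect converter, and the schemas.enable settings are discussed next):

"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": false,
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": false

Converters set at this level apply only to the connector in whose configuration they appear; other connectors on the same Kafka Connect cluster keep the worker-level defaults.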
Kafka Connect provides a JSON converter that serializes the record keys and values into JSON documents. The default behavior is that the JSON converter includes the record's message schema, which makes each record very verbose. The Getting Started with Debezium guide shows what the records look like when both payload and schemas are included. If you want records to be serialized with JSON, consider setting the following connector configuration properties to false : key.converter.schemas.enable value.converter.schemas.enable Setting these properties to false excludes the verbose schema information from each record. Alternatively, you can serialize the record keys and values by using Apache Avro . The Avro binary format is compact and efficient. Avro schemas make it possible to ensure that each record has the correct structure. Avro's schema evolution mechanism enables schemas to evolve. This is essential for Debezium connectors, which dynamically generate each record's schema to match the structure of the database table that was changed. Over time, change event records written to the same Kafka topic might have different versions of the same schema. Avro serialization makes it easier for the consumers of change event records to adapt to a changing record schema. To use Apache Avro serialization, you must deploy a schema registry that manages Avro message schemas and their versions. For information about setting up this registry, see the documentation for Installing and deploying Red Hat build of Apicurio Registry on OpenShift . 6.2.1. About the Apicurio Registry Red Hat build of Apicurio Registry Red Hat build of Apicurio Registry provides the following components that work with Avro: An Avro converter that you can specify in Debezium connector configurations. This converter maps Kafka Connect schemas to Avro schemas. The converter then uses the Avro schemas to serialize the record keys and values into Avro's compact binary form. An API and schema registry that tracks: Avro schemas that are used in Kafka topics. Where the Avro converter sends the generated Avro schemas. Because the Avro schemas are stored in this registry, each record needs to contain only a tiny schema identifier . This makes each record even smaller. For an I/O bound system like Kafka, this means more total throughput for producers and consumers. Avro Serdes (serializers and deserializers) for Kafka producers and consumers. Kafka consumer applications that you write to consume change event records can use Avro Serdes to deserialize the change event records. To use the Apicurio Registry with Debezium, add Apicurio Registry converters and their dependencies to the Kafka Connect container image that you are using for running a Debezium connector. Note The Apicurio Registry project also provides a JSON converter. This converter combines the advantage of less verbose messages with human-readable JSON. Messages do not contain the schema information themselves, but only a schema ID. Note To use converters provided by Apicurio Registry you need to provide apicurio.registry.url . 6.2.2. Overview of deploying a Debezium connector that uses Avro serialization To deploy a Debezium connector that uses Avro serialization, you must complete three main tasks: Deploy a Red Hat build of Apicurio Registry instance by following the instructions in Installing and deploying Red Hat build of Apicurio Registry on OpenShift . 
Install the Avro converter by downloading the Debezium Service Registry Kafka Connect zip file and extracting it into the Debezium connector's directory. Configure a Debezium connector instance to use Avro serialization by setting configuration properties as follows: Internally, Kafka Connect always uses JSON key/value converters for storing configuration and offsets. 6.2.3. Deploying connectors that use Avro in Debezium containers In your environment, you might want to use a provided Debezium container to deploy Debezium connectors that use Avro serialization. Complete the following procedure to build a custom Kafka Connect container image for Debezium, and configure the Debezium connector to use the Avro converter. Prerequisites You have Docker installed and sufficient rights to create and manage containers. You downloaded the Debezium connector plug-in(s) that you want to deploy with Avro serialization. Procedure Deploy an instance of Apicurio Registry. See Installing and deploying Red Hat build of Apicurio Registry on OpenShift , which provides instructions for: Installing Apicurio Registry Installing AMQ Streams Setting up AMQ Streams storage Extract the Debezium connector archives to create a directory structure for the connector plug-ins. If you downloaded and extracted the archives for multiple Debezium connectors, the resulting directory structure looks like the one in the following example: Add the Avro converter to the directory that contains the Debezium connector that you want to configure to use Avro serialization: Go to the Software Downloads and download the Apicurio Registry Kafka Connect zip file. Extract the archive into the desired Debezium connector directory. To configure more than one type of Debezium connector to use Avro serialization, extract the archive into the directory for each relevant connector type. Although extracting the archive to each directory duplicates the files, by doing so you remove the possibility of conflicting dependencies. Create and publish a custom image for running Debezium connectors that are configured to use the Avro converter: Create a new Dockerfile by using registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 as the base image. In the following example, replace my-plugins with the name of your plug-ins directory: Before Kafka Connect starts running the connector, Kafka Connect loads any third-party plug-ins that are in the /opt/kafka/plugins directory. Build the docker container image. For example, if you saved the docker file that you created in the step as debezium-container-with-avro , then you would run the following command: docker build -t debezium-container-with-avro:latest Push your custom image to your container registry, for example: docker push <myregistry.io> /debezium-container-with-avro:latest Point to the new container image. Do one of the following: Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... image: debezium-container-with-avro In the install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable to point to the new container image and reinstall the Cluster Operator. If you edit this file you will need to apply it to your OpenShift cluster. 
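The Dockerfile referenced in the image-build step above is not shown in this excerpt; a minimal sketch, assuming the connector archives and the Apicurio Avro converter were extracted into a local my-plugins directory, might look like this:

FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
USER root:root
# Copy the Debezium connector plug-ins (including the Avro converter) into the Kafka Connect plug-in path
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001

As noted above, Kafka Connect loads any third-party plug-ins found in /opt/kafka/plugins before it starts running connectors.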
Deploy each Debezium connector that is configured to use the Avro converter. For each Debezium connector: Create a Debezium connector instance. The following inventory-connector.yaml file example creates a KafkaConnector custom resource that defines a MySQL connector instance that is configured to use the Avro converter: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 1 config: database.hostname: mysql database.port: 3306 database.user: debezium database.password: dbz database.server.id: 184054 topic.prefix: dbserver1 database.include.list: inventory schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 schema.history.internal.kafka.topic: schema-changes.inventory schema.name.adjustment.mode: avro key.converter: io.apicurio.registry.utils.converter.AvroConverter key.converter.apicurio.registry.url: http://apicurio:8080/api key.converter.apicurio.registry.global-id: io.apicurio.registry.utils.serde.strategy.GetOrCreateIdStrategy value.converter: io.apicurio.registry.utils.converter.AvroConverter value.converter.apicurio.registry.url: http://apicurio:8080/api value.converter.apicurio.registry.global-id: io.apicurio.registry.utils.serde.strategy.GetOrCreateIdStrategy Apply the connector instance, for example: oc apply -f inventory-connector.yaml This registers inventory-connector and the connector starts to run against the inventory database. Verify that the connector was created and has started to track changes in the specified database. You can verify the connector instance by watching the Kafka Connect log output as, for example, inventory-connector starts. Display the Kafka Connect log output: oc logs USD(oc get pods -o name -l strimzi.io/name=my-connect-cluster-connect) Review the log output to verify that the initial snapshot has been executed. You should see something like the following lines: ... 2020-02-21 17:57:30,801 INFO Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'debezium' with locking mode 'minimal' (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,805 INFO Snapshot is using user 'debezium' with these MySQL grants: (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] ... Taking the snapshot involves a number of steps: ... 
2020-02-21 17:57:30,822 INFO Step 0: disabling autocommit, enabling repeatable read transactions, and setting lock wait timeout to 10 (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,836 INFO Step 1: flush and obtain global read lock to prevent writes to database (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,839 INFO Step 2: start transaction with consistent snapshot (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,840 INFO Step 3: read binlog position of MySQL primary server (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,843 INFO using binlog 'mysql-bin.000003' at position '154' and gtid '' (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] ... 2020-02-21 17:57:34,423 INFO Step 9: committing transaction (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:34,424 INFO Completed snapshot in 00:00:03.632 (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] ... After completing the snapshot, Debezium begins tracking changes in, for example, the inventory database's binlog for change events: ... 2020-02-21 17:57:35,584 INFO Transitioning from the snapshot reader to the binlog reader (io.debezium.connector.mysql.ChainedReader) [task-thread-inventory-connector-0] 2020-02-21 17:57:35,613 INFO Creating thread debezium-mysqlconnector-dbserver1-binlog-client (io.debezium.util.Threads) [task-thread-inventory-connector-0] 2020-02-21 17:57:35,630 INFO Creating thread debezium-mysqlconnector-dbserver1-binlog-client (io.debezium.util.Threads) [blc-mysql:3306] Feb 21, 2020 5:57:35 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect INFO: Connected to mysql:3306 at mysql-bin.000003/154 (sid:184054, cid:5) 2020-02-21 17:57:35,775 INFO Connected to MySQL binlog at mysql:3306, starting at binlog file 'mysql-bin.000003', pos=154, skipping 0 events plus 0 rows (io.debezium.connector.mysql.BinlogReader) [blc-mysql:3306] ... 6.2.4. About Avro name requirements As stated in the Avro documentation , names must adhere to the following rules: Start with [A-Za-z_] Subsequently contains only [A-Za-z0-9_] characters Debezium uses the column's name as the basis for the corresponding Avro field. This can lead to problems during serialization if the column name does not also adhere to the Avro naming rules. Each Debezium connector provides a configuration property, field.name.adjustment.mode that you can set to avro if you have columns that do not adhere to Avro rules for names. Setting field.name.adjustment.mode to avro allows serialization of non-conformant fields without having to actually modify your schema. 6.3. Emitting Debezium change event records in CloudEvents format CloudEvents is a specification for describing event data in a common way. Its aim is to provide interoperability across services, platforms and systems. Debezium enables you to configure a Db2, MongoDB, MySQL, Oracle, PostgreSQL, or SQL Server connector to emit change event records that conform to the CloudEvents specification. Important Emitting change event records in CloudEvents format is a Technology Preview feature. 
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope . The CloudEvents specification defines: A set of standardized event attributes Rules for defining custom attributes Encoding rules for mapping event formats to serialized representations such as JSON or Apache Avro Protocol bindings for transport layers such as Apache Kafka, HTTP or AMQP To configure a Debezium connector to emit change event records that conform to the CloudEvents specification, Debezium provides the io.debezium.converters.CloudEventsConverter , which is a Kafka Connect message converter. Currently, only structured mapping mode can be used. The CloudEvents change event envelope can be JSON or Avro, and you can use JSON or Avro as the data format for each envelope type. Information about emitting change events in CloudEvents format is organized as follows: Section 6.3.1, "Example Debezium change event records in CloudEvents format" Section 6.3.2, "Example of configuring Debezium CloudEvents converter" Section 6.3.4, "Debezium CloudEvents converter configuration options" For information about using Avro, see: Avro serialization Apicurio Registry 6.3.1. Example Debezium change event records in CloudEvents format The following example shows what a CloudEvents change event record emitted by a PostgreSQL connector looks like. In this example, the PostgreSQL connector is configured to use JSON as the CloudEvents format envelope and also as the data format. { "id" : "name:test_server;lsn:29274832;txId:565", 1 "source" : "/debezium/postgresql/test_server", 2 "specversion" : "1.0", 3 "type" : "io.debezium.connector.postgresql.DataChangeEvent", 4 "time" : "2020-01-13T13:55:39.738Z", 5 "datacontenttype" : "application/json", 6 "iodebeziumop" : "r", 7 "iodebeziumversion" : "2.7.3.Final", 8 "iodebeziumconnector" : "postgresql", "iodebeziumname" : "test_server", "iodebeziumtsms" : "1578923739738", "iodebeziumsnapshot" : "true", "iodebeziumdb" : "postgres", "iodebeziumschema" : "s1", "iodebeziumtable" : "a", "iodebeziumlsn" : "29274832", "iodebeziumxmin" : null, "iodebeziumtxid": "565", 9 "iodebeziumtxtotalorder": "1", "iodebeziumtxdatacollectionorder": "1", "data" : { 10 "before" : null, "after" : { "pk" : 1, "name" : "Bob" } } } Table 6.3. Descriptions of fields in a CloudEvents change event record Item Description 1 Unique ID that the connector generates for the change event based on the change event's content. 2 The source of the event, which is the logical name of the database as specified by the topic.prefix property in the connector's configuration. 3 The CloudEvents specification version. 4 Connector type that generated the change event. The format of this field is io.debezium.connector. CONNECTOR_TYPE .DataChangeEvent . Valid values for CONNECTOR_TYPE are db2 , mongodb , mysql , oracle , postgresql , or sqlserver . 5 Time of the change in the source database. 6 Describes the content type of the data attribute. Possible values are json , as in this example, or avro . 7 An operation identifier. Possible values are r for read, c for create, u for update, or d for delete. 
8 All source attributes that are known from Debezium change events are mapped to CloudEvents extension attributes by using the iodebezium prefix for the attribute name. 9 When enabled in the connector, each transaction attribute that is known from Debezium change events is mapped to a CloudEvents extension attribute by using the iodebeziumtx prefix for the attribute name. 10 The actual data change. Depending on the operation and the connector, the data might contain before , after , or patch fields. The following example also shows what a CloudEvents change event record emitted by a PostgreSQL connector looks like. In this example, the PostgreSQL connector is again configured to use JSON as the CloudEvents format envelope, but this time the connector is configured to use Avro for the data format. { "id" : "name:test_server;lsn:33227720;txId:578", "source" : "/debezium/postgresql/test_server", "specversion" : "1.0", "type" : "io.debezium.connector.postgresql.DataChangeEvent", "time" : "2020-01-13T14:04:18.597Z", "datacontenttype" : "application/avro", 1 "dataschema" : "http://my-registry/schemas/ids/1", 2 "iodebeziumop" : "r", "iodebeziumversion" : "2.7.3.Final", "iodebeziumconnector" : "postgresql", "iodebeziumname" : "test_server", "iodebeziumtsms" : "1578924258597", "iodebeziumsnapshot" : "true", "iodebeziumdb" : "postgres", "iodebeziumschema" : "s1", "iodebeziumtable" : "a", "iodebeziumtxId" : "578", "iodebeziumlsn" : "33227720", "iodebeziumxmin" : null, "iodebeziumtxid": "578", "iodebeziumtxtotalorder": "1", "iodebeziumtxdatacollectionorder": "1", "data" : "AAAAAAEAAgICAg==" 3 } Table 6.4. Descriptions of fields in a CloudEvents event record for a connector that uses Avro to format data Item Description 1 Indicates that the data attribute contains Avro binary data. 2 URI of the schema to which the Avro data adheres. 3 The data attribute contains base64-encoded Avro binary data. It is also possible to use Avro for the envelope as well as the data attribute. 6.3.2. Example of configuring Debezium CloudEvents converter Configure io.debezium.converters.CloudEventsConverter in your Debezium connector configuration. The following example shows how to configure the CloudEvents converter to emit change event records that have the following characteristics: Use JSON as the envelope. Use the schema registry at http://my-registry/schemas/ids/1 to serialize the data attribute as binary Avro data. ... "value.converter": "io.debezium.converters.CloudEventsConverter", "value.converter.serializer.type" : "json", 1 "value.converter.data.serializer.type" : "avro", "value.converter.avro.schema.registry.url": "http://my-registry/schemas/ids/1" ... Table 6.5. Description of fields in CloudEvents converter configuration Item Description 1 Specifying the serializer.type is optional, because json is the default. The CloudEvents converter converts Kafka record values. In the same connector configuration, you can specify key.converter if you want to operate on record keys. For example, you might specify StringConverter , LongConverter , JsonConverter , or AvroConverter . 6.3.3. Configuration of sources of metadata and some CloudEvents fields By default, the metadata.source property consists of three parts, as seen in the following example: "value,id:generate,type:generate,dataSchemaName:generate" The first part specifies the source for retrieving a record's metadata; the permitted values are value and header . 
The remaining parts specify how the converter populates values for the following metadata fields: id type dataSchemaName (the name under which the schema is registered in the Schema Registry) The converter can use one of the following methods to populate each field: generate The converter generates a value for the field. header The converter obtains values for the field from a message header. Obtaining record metadata To construct a CloudEvent, the converter requires source, operation, and transaction metadata. Generally, the converter can retrieve the metadata from a record's value. But in some cases, before the converter receives a record, the record might be processed in such a way that metadata is not present in its value, for example, after the record is processed by the Outbox Event Router SMT. To preserve the required metadata, you can use the following approach to pass the metadata in the record headers. Procedure Implement a mechanism for recording the metadata in the record's headers before the record reaches the converter, for example, by using the HeaderFrom SMT. Set the value of the converter's metadata.source property to header . The following example shows the configuration for a connector that uses the Outbox Event Router SMT, and the HeaderFrom SMT: ... "tombstones.on.delete": false, "transforms": "addMetadataHeaders,outbox", "transforms.addMetadataHeaders.type": "org.apache.kafka.connect.transforms.HeaderFrom$Value", "transforms.addMetadataHeaders.fields": "source,op,transaction", "transforms.addMetadataHeaders.headers": "source,op,transaction", "transforms.addMetadataHeaders.operation": "copy", "transforms.addMetadataHeaders.predicate": "isHeartbeat", "transforms.addMetadataHeaders.negate": true, "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter", "transforms.outbox.table.expand.json.payload": true, "transforms.outbox.table.fields.additional.placement": "type:header", "predicates": "isHeartbeat", "predicates.isHeartbeat.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches", "predicates.isHeartbeat.pattern": "__debezium-heartbeat.*", "value.converter": "io.debezium.converters.CloudEventsConverter", "value.converter.metadata.source": "header", "header.converter": "org.apache.kafka.connect.json.JsonConverter", "header.converter.schemas.enable": true ... Note To use the HeaderFrom transformation, it might be necessary to filter tombstone and heartbeat messages. The header value of the metadata.source property is a global setting. As a result, even if you omit parts of a property's value, such as the id and type sources, the converter generates header values for the omitted parts. Obtaining CloudEvent metadata By default, the CloudEvents converter automatically generates values for the id and type fields of a CloudEvent, and generates the schema name for its data field. You can customize the way that the converter populates these fields by changing the defaults and specifying the fields' values in the appropriate headers. For example: "value.converter.metadata.source": "value,id:header,type:header,dataSchemaName:header" With the preceding configuration in effect, you could configure upstream functions to add id and type headers with the values that you want to pass to the CloudEvents converter. 
If you want to provide a value only for the id header, use: "value.converter.metadata.source": "value,id:header,type:generate,dataSchemaName:generate" To configure the converter to obtain id , type , and dataSchemaName metadata from headers, use the following short syntax: "value.converter.metadata.source": "header" To enable the converter to retrieve the data schema name from a header field, you must set schema.data.name.source.header.enable to true . 6.3.4. Debezium CloudEvents converter configuration options When you configure a Debezium connector to use the CloudEvents converter, you can specify the following options. Table 6.6. Descriptions of CloudEvents converter configuration options Option Default Description serializer.type json The encoding type to use for the CloudEvents envelope structure. The value can be json or avro . data.serializer.type json The encoding type to use for the data attribute. The value can be json or avro . json. ... N/A Any configuration options to be passed through to the underlying converter when using JSON. The json. prefix is removed. avro. ... N/A Any configuration options to be passed through to the underlying converter when using Avro. The avro. prefix is removed. For example, for Avro data , you would specify the avro.schema.registry.url option. schema.name.adjustment.mode none Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. The value can be none or avro . schema.cloudevents.name none Specifies the CloudEvents schema name under which the schema is registered in a Schema Registry. The setting is ignored when serializer.type is json , in which case a record's value is schemaless. If this property is not specified, the default algorithm is used to generate the schema name: ${serverName}.${databaseName}.CloudEvents.Envelope . schema.data.name.source.header.enable false Specifies whether the converter can retrieve the schema name of the CloudEvents data field from a header. The schema name is obtained from the dataSchemaName parameter that is specified in the metadata.source property. extension.attributes.enable true Specifies whether the converter includes extension attributes when it generates a cloud event. The value can be true or false . metadata.source value,id:generate,type:generate,dataSchemaName:generate A comma-separated list that specifies the sources from which the converter retrieves metadata values (source, operation, transaction) for CloudEvent id and type fields, and for the dataSchemaName parameter, which specifies the name under which the schema is registered in a Schema Registry. The first element in the list is a global setting that specifies the source of the metadata. The source of metadata can be value or header . The global setting is followed by a set of pairs. The first element in each pair specifies the name of a CloudEvent field ( id or type ), or the name of a data schema ( dataSchemaName ). The second element in the pair specifies how the converter populates the value of the field. Valid values are generate or header . Separate the values in each pair with a colon, for example: value,id:header,type:generate,dataSchemaName:header For configuration examples, see Configuration of sources of metadata and some CloudEvents fields . 6.4. Configuring notifications to report connector status Debezium notifications provide a mechanism to obtain status information about the connector. 
Notifications can be sent to the following channels: SinkNotificationChannel Sends notifications through the Connect API to a configured topic. LogNotificationChannel Notifications are appended to the log. JmxNotificationChannel Notifications are exposed as an attribute in a JMX bean. For details about Debezium notifications, see the following topics Section 6.4.1, "Description of the format of Debezium notifications" Section 6.4.2, "Types of Debezium notifications" Section 6.4.3, "Enabling Debezium to emit events to notification channels" 6.4.1. Description of the format of Debezium notifications Notification messages contain the following information: Property Description id A unique identifier that is assigned to the notification. For incremental snapshot notifications, the id is the same sent with the execute-snapshot signal. aggregate_type The data type of the aggregate root to which a notification is related. In domain-driven design, exported events should always refer to an aggregate. type Provides status information about the event specified in the aggregate_type field. additional_data A Map<String,String> with detailed information about the notification. For an example, see Debezium notifications about the progress of incremental snapshots . timestamp The time when the notification was created. The value represents the number of milliseconds since the UNIX epoch. 6.4.2. Types of Debezium notifications Debezium notifications deliver information about the progress of initial snapshots or incremental snapshots . Debezium notifications about the status of an initial snapshot The following example shows a typical notification that provides the status of an initial snapshot: { "id": "5563ae14-49f8-4579-9641-c1bbc2d76f99", "aggregate_type": "Initial Snapshot", "type": "COMPLETED", 1 "additional_data" : { "connector_name": "myConnector" }, "timestamp": "1695817046353" } Item Description 1 The type field can contain one of the following values: COMPLETED ABORTED SKIPPED The following table shows examples of the different payloads that might be present in notifications that report the status of initial snapshots: Status Payload STARTED { "id":"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f", "aggregate_type":"Initial Snapshot", "type":"STARTED", "additional_data":{ "connector_name":"my-connector" }, "timestamp": "1695817046353" } IN_PROGRESS { "id":"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d", "aggregate_type":"Initial Snapshot", "type":"IN_PROGRESS", "additional_data":{ "connector_name":"my-connector", "data_collections":"table1, table2", "current_collection_in_progress":"table1" }, "timestamp": "1695817046353" } Field data_collection are currently not supported for MongoDB connector TABLE_SCAN_COMPLETED { "id":"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d", "aggregate_type":"Initial Snapshot", "type":"TABLE_SCAN_COMPLETED", "additional_data":{ "connector_name":"my-connector", "data_collection":"table1, table2", "scanned_collection":"table1", "total_rows_scanned":"100", "status":"SUCCEEDED" }, "timestamp": "1695817046353" } In the preceding example, the additional_data.status field can contain one of the following values: SQL_EXCEPTION A SQL exception occurred while performing the snapshot. SUCCEEDED The snapshot completed successfully. 
Fields total_rows_scanned and data_collection are currently not supported for MongoDB connector COMPLETED { "id":"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f", "aggregate_type":"Initial Snapshot", "type":"COMPLETED", "additional_data":{ "connector_name":"my-connector" }, "timestamp": "1695817046353" } ABORTED { "id":"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f", "aggregate_type":"Initial Snapshot", "type":"ABORTED", "additional_data":{ "connector_name":"my-connector" }, "timestamp": "1695817046353" } SKIPPED { "id":"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f", "aggregate_type":"Initial Snapshot", "type":"SKIPPED", "additional_data":{ "connector_name":"my-connector" }, "timestamp": "1695817046353" } 6.4.2.1. Example: Debezium notifications that report on the progress of incremental snapshots The following table shows examples of the different payloads that might be present in notifications that report the status of incremental snapshots: Status Payload Start { "id":"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f", "aggregate_type":"Incremental Snapshot", "type":"STARTED", "additional_data":{ "connector_name":"my-connector", "data_collections":"table1, table2" }, "timestamp": "1695817046353" } Paused { "id":"068d07a5-d16b-4c4a-b95f-8ad061a69d51", "aggregate_type":"Incremental Snapshot", "type":"PAUSED", "additional_data":{ "connector_name":"my-connector", "data_collections":"table1, table2" }, "timestamp": "1695817046353" } Resumed { "id":"a9468204-769d-430f-96d2-b0933d4839f3", "aggregate_type":"Incremental Snapshot", "type":"RESUMED", "additional_data":{ "connector_name":"my-connector", "data_collections":"table1, table2" }, "timestamp": "1695817046353" } Stopped { "id":"83fb3d6c-190b-4e40-96eb-f8f427bf482c", "aggregate_type":"Incremental Snapshot", "type":"ABORTED", "additional_data":{ "connector_name":"my-connector" }, "timestamp": "1695817046353" } Processing chunk { "id":"d02047d6-377f-4a21-a4e9-cb6e817cf744", "aggregate_type":"Incremental Snapshot", "type":"IN_PROGRESS", "additional_data":{ "connector_name":"my-connector", "data_collections":"table1, table2", "current_collection_in_progress":"table1", "maximum_key":"100", "last_processed_key":"50" }, "timestamp": "1695817046353" } Snapshot completed for a table { "id":"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d", "aggregate_type":"Incremental Snapshot", "type":"TABLE_SCAN_COMPLETED", "additional_data":{ "connector_name":"my-connector", "data_collection":"table1, table2", "scanned_collection":"table1", "total_rows_scanned":"100", "status":"SUCCEEDED" }, "timestamp": "1695817046353" } In the preceding example, the additional_data.status field can contain one of the following values: EMPTY The table contains no values. NO_PRIMARY_KEY Cannot complete snapshot; table has no primary key. SKIPPED Cannot complete a snapshots for this type of table. Refer to the logs for details. SQL_EXCEPTION A SQL exception occurred while performing the snapshot. SUCCEEDED The snapshot completed successfully. UNKNOWN_SCHEMA Could not find a schema for the table. Check the logs for the list of known tables. Completed { "id":"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d", "aggregate_type":"Incremental Snapshot", "type":"COMPLETED", "additional_data":{ "connector_name":"my-connector" }, "timestamp": "1695817046353" } 6.4.3. Enabling Debezium to emit events to notification channels To enable Debezium to emit notifications, specify a list of notification channels by setting the notification.enabled.channels configuration property. 
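For example, to emit notifications both to a Kafka topic and to the connector log, a connector configuration might include properties similar to the following sketch (the topic name is an assumed value; the notification.sink.topic.name property is described below):
"notification.enabled.channels": "sink,log",
"notification.sink.topic.name": "debezium-notifications"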
By default, the following notification channels are available: sink log jmx Important To use the sink notification channel, you must also set the notification.sink.topic.name configuration property to the name of the topic where you want Debezium to send notifications. 6.4.3.1. Enabling Debezium notifications to report events exposed through JMX beans To enable Debezium to report events that are exposed through JMX beans, complete the following configuration steps: Enable the JMX MBean Server to expose the notification bean. Add jmx to the notification.enabled.channels property in the connector configuration. Connect your preferred JMX client to the MBean Server. Notifications are exposed through the Notifications attribute of a bean with the name debezium. <connector-type> .management.notifications. <server> . The following image shows a notification that reports the start of an incremental snapshot: To discard a notification, call the reset operation on the bean. The notifications are also exposed as a JMX notification with type debezium.notification . To enable an application to listen for the JMX notifications that an MBean emits, subscribe the application to the notifications . 6.5. Sending signals to a Debezium connector The Debezium signaling mechanism provides a way to modify the behavior of a connector, or to trigger a one-time action, such as initiating an ad hoc snapshot of a table. To use signals to trigger a connector to perform a specified action, you can configure the connector to use one or more of the following channels: SourceSignalChannel You can issue a SQL command to add a signal message to a specialized signaling data collection. The signaling data collection, which you create on the source database, is designated exclusively for communicating with Debezium. KafkaSignalChannel You submit signal messages to a configurable Kafka topic. JmxSignalChannel You submit signals through the JMX signal operation. FileSignalChannel You can use a file to send signals. When Debezium detects that a new logging record or ad hoc snapshot record is added to the channel, it reads the signal, and initiates the requested operation. Signaling is available for use with the following Debezium connectors: Db2 MariaDB (Technology Preview) MongoDB MySQL Oracle PostgreSQL SQL Server You can specify which channel is enabled by setting the signal.enabled.channels configuration property. The property lists the names of the channels that are enabled. By default, Debezium provides the following channels: source and kafka . The source channel is enabled by default, because it is required for incremental snapshot signals. 6.5.1. Enabling Debezium source signaling channel By default, the Debezium source signaling channel is enabled. You must explicitly configure signaling for each connector that you want to use it with. Procedure On the source database, create a signaling data collection table for sending signals to the connector. For information about the required structure of the signaling data collection, see Structure of a signaling data collection . For source databases such as Db2 or SQL Server that implement a native change data capture (CDC) mechanism, enable CDC for the signaling table. Add the name of the signaling data collection to the Debezium connector configuration. In the connector configuration, add the property signal.data.collection , and set its value to the fully-qualified name of the signaling data collection that you created in Step 1. 
For example, signal.data.collection = inventory.debezium_signals . The format for the fully-qualified name of the signaling collection depends on the connector. The following example shows the naming formats to use for each connector: Fully qualified table names Db2 <schemaName> . <tableName> MariaDB (Technology Preview) <databaseName> . <tableName> MongoDB <databaseName> . <collectionName> MySQL <databaseName> . <tableName> Oracle <databaseName> . <schemaName> . <tableName> PostgreSQL <schemaName> . <tableName> SQL Server <databaseName> . <schemaName> . <tableName> For more information about setting the signal.data.collection property, see the table of configuration properties for your connector. 6.5.1.1. Required structure of a Debezium signaling data collection A signaling data collection, or signaling table, stores signals that you send to a connector to trigger a specified operation. The structure of the signaling table must conform to the following standard format. Contains three fields (columns). Fields are arranged in a specific order, as shown in Table 1 . Table 6.7. Required structure of a signaling data collection Field Type Description id (required) string An arbitrary unique string that identifies a signal instance. You assign an id to each signal that you submit to the signaling table. Typically, the ID is a UUID string. You can use signal instances for logging, debugging, or de-duplication. When a signal triggers Debezium to perform an incremental snapshot, it generates a signal message with an arbitrary id string. The id string that the generated message contains is unrelated to the id string in the submitted signal. type (required) string Specifies the type of signal to send. You can use some signal types with any connector for which signaling is available, while other signal types are available for specific connectors only. data (optional) string Specifies JSON-formatted parameters to pass to a signal action. Each signal type requires a specific set of data. Note The field names in a data collection are arbitrary. The preceding table provides suggested names. If you use a different naming convention, ensure that the values in each field are consistent with the expected content. 6.5.1.2. Creating a Debezium signaling data collection You create a signaling table by submitting a standard SQL DDL query to the source database. Prerequisites You have sufficient access privileges to create a table on the source database. Procedure Submit a SQL query to the source database to create a table that is consistent with the required structure , as shown in the following example: CREATE TABLE <tableName> (id VARCHAR( <varcharValue> ) PRIMARY KEY, type VARCHAR( <varcharValue> ) NOT NULL, data VARCHAR( <varcharValue> ) NULL); Note The amount of space that you allocate to the VARCHAR parameter of the id variable must be sufficient to accommodate the size of the ID strings of signals sent to the signaling table. If the size of an ID exceeds the available space, the connector cannot process the signal. The following example shows a CREATE TABLE command that creates a three-column debezium_signal table: CREATE TABLE debezium_signal (id VARCHAR(42) PRIMARY KEY, type VARCHAR(32) NOT NULL, data VARCHAR(2048) NULL); 6.5.2. Enabling the Debezium Kafka signaling channel You can enable the Kafka signaling channel by adding it to the signal.enabled.channels configuration property, and then adding the name of the topic that receives signals to the signal.kafka.topic property. 
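For example, the signaling-related portion of a connector configuration might look like the following sketch (the topic name and broker address are assumed values, and signal.kafka.bootstrap.servers is the connection property for the consumer that reads the signal topic):
"signal.enabled.channels": "source,kafka",
"signal.kafka.topic": "dbserver1-signal",
"signal.kafka.bootstrap.servers": "kafka:9092"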
After you enable the signaling channel, a Kafka consumer is created to consume signals that are sent to the configured signal topic. Additional configuration available for the consumer Db2 connector Kafka signal configuration properties MariaDB connector Kafka signal configuration properties MongoDB connector Kafka signal configuration properties MySQL connector Kafka signal configuration properties Oracle connector Kafka signal configuration properties PostgreSQL connector Kafka signal configuration properties SQL Server connector Kafka signal configuration properties Note To use Kafka signaling to trigger ad hoc incremental snapshots for most connectors, you must first enable a source signaling channel in the connector configuration. The source channel implements a watermarking mechanism to deduplicate events that might be captured by an incremental snapshot and then captured again after streaming resumes. Enabling the source channel is not required when using a signaling channel to trigger an incremental snapshot of a read-only MySQL database that has GTIDs enabled . For more information, see MySQL read only incremental snapshot Message format The key of the Kafka message must match the value of the topic.prefix connector configuration option. The value is a JSON object with type and data fields. When the signal type is set to execute-snapshot , the data field must include the fields that are listed in the following table: Table 6.8. Execute snapshot data fields Field Default Value type incremental The type of the snapshot to run. Currently Debezium supports the incremental and blocking types. data-collections N/A An array of comma-separated regular expressions that match the fully qualified names of the data collections to include in the snapshot. The naming format depends on the database. additional-conditions N/A An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. Each additional condition is an object that specifies the criteria for filtering the data that an ad hoc snapshot captures. You can set the following properties for each additional condition: data-collection The fully-qualified name of the data collection that the filter applies to. You can apply different filters to each data collection. filter Specifies column values that must be present in a database record for the snapshot to include it, for example, "color='blue'" . The snapshot process evaluates records in the data collection against the filter value and captures only records that contain matching values. The specific values that you assign to the filter property depend on the type of ad hoc snapshot: For incremental snapshots, you specify a search condition fragment, such as "color='blue'" , that the snapshot appends to the condition clause of a query. For blocking snapshots, you specify a full SELECT statement, such as the one that you might set in the snapshot.select.statement.overrides property. The following example shows a typical execute-snapshot Kafka message: 6.5.3. Enabling the Debezium JMX signaling channel You can enable the JMX signaling by adding jmx to the signal.enabled.channels property in the connector configuration, and then enabling the JMX MBean Server to expose the signaling bean. Procedure Use your preferred JMX client (for example. JConsole or JDK Mission Control) to connect to the MBean server. Search for the Mbean debezium. <connector-type> .management.signals. <server> . 
The Mbean exposes signal operations that accept the following input parameters: p0 The id of the signal. p1 The type of the signal, for example, execute-snapshot . p2 A JSON data field that contains additional information about the specified signal type. Send an execute-snapshot signal by providing value for the input parameters. In the JSON data field, include the information that is listed in the following table: Table 6.9. Execute snapshot data fields Field Default Value type incremental The type of the snapshot to run. Currently Debezium supports the incremental and blocking types. data-collections N/A An array of comma-separated regular expressions that match the fully-qualified names of the tables to include in the snapshot. additional-conditions N/A An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. Each additional condition is an object that specifies the criteria for filtering the data that an ad hoc snapshot captures. You can set the following properties for each additional condition: data-collection The fully-qualified name of the data collection that the filter applies to. You can apply different filters to each data collection. filter Specifies column values that must be present in a database record for the snapshot to include it, for example, "color='blue'" . The snapshot process evaluates records in the data collection against the filter value and captures only records that contain matching values. The specific values that you assign to the filter property depend on the type of ad hoc snapshot: For incremental snapshots, you specify a search condition fragment, such as "color='blue'" , that the snapshot appends to the condition clause of a query. For blocking snapshots, you specify a full SELECT statement, such as the one that you might set in the snapshot.select.statement.overrides property. The following image shows an example of how to use JConsole to send a signal: 6.5.4. Types of Debezium signal actions You can use signaling to initiate the following actions: Add messages to the log . Trigger ad hoc incremental snapshots . Stop execution of an ad hoc snapshot . Pause incremental snapshots . Resume incremental snapshots . Trigger ad hoc blocking snapshot . Custom action . Some signals are not compatible with all connectors. 6.5.4.1. Logging signals You can request a connector to add an entry to the log by creating a signaling table entry with the log signal type. After processing the signal, the connector prints the specified message to the log. Optionally, you can configure the signal so that the resulting message includes the streaming coordinates. Table 6.10. Example of a signaling record for adding a log message Column Value Description id 924e3ff8-2245-43ca-ba77-2af9af02fa07 type log The action type of the signal. data {"message": "Signal message at offset {}"} The message parameter specifies the string to print to the log. If you add a placeholder ( {} ) to the message, it is replaced with streaming coordinates. 6.5.4.2. Ad hoc snapshot signals You can request a connector to initiate an ad hoc snapshot by creating a signal with the execute-snapshot signal type. After processing the signal, the connector runs the requested snapshot operation. Unlike the initial snapshot that a connector runs after it first starts, an ad hoc snapshot occurs during runtime, after the connector has already begun to stream change events from a database. 
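For example, to request an ad hoc incremental snapshot through the source channel, you can insert a row into the signaling table. The following sketch uses the debezium_signal table from the earlier CREATE TABLE example; the id value is an arbitrary identifier (a UUID string is typical):
INSERT INTO debezium_signal (id, type, data) VALUES ('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["public.MyFirstTable"], "type": "INCREMENTAL"}');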
You can initiate ad hoc snapshots at any time. Ad hoc snapshots are available for the following Debezium connectors: Db2 MariaDB (Technology Preview) MongoDB MySQL Oracle PostgreSQL SQL Server Table 6.11. Example of an ad hoc snapshot signal record Column Value id d139b9b7-7777-4547-917d-e1775ea61d41 type execute-snapshot data {"data-collections": ["public.MyFirstTable", "public.MySecondTable"]} Table 6.12. Example of an ad hoc snapshot signal message Key Value test_connector {"type":"execute-snapshot","data": {"data-collections": ["public.MyFirstTable"], "type": "INCREMENTAL", "additional-conditions":[{"data-collection": "public.MyFirstTable", "filter":"color='blue' AND brand='MyBrand'"}]}} For more information about ad hoc snapshots, see the Snapshots topic in the documentation for your connector. Additional resources Db2 connector incremental snapshots MongoDB connector incremental snapshots MySQL connector incremental snapshots Oracle connector incremental snapshots PostgreSQL connector incremental snapshots SQL Server connector incremental snapshots Ad hoc snapshot stop signals You can request a connector to stop an in-progress ad hoc snapshot by creating a signal table entry with the stop-snapshot signal type. After processing the signal, the connector stops the current in-progress snapshot operation. You can stop ad hoc snapshots for the following Debezium connectors: Db2 MariaDB (Technology Preview) MongoDB MySQL Oracle PostgreSQL SQL Server Table 6.13. Example of a stop ad hoc snapshot signal record Column Value id d139b9b7-7777-4547-917d-e1775ea61d41 type stop-snapshot data {"type":"INCREMENTAL", "data-collections": ["public.MyFirstTable"]} You must specify the type of the signal. The data-collections field is optional. Leave the data-collections field blank to request the connector to stop all activity in the current snapshot. If you want the incremental snapshot to proceed, but you want to exclude specific collections from the snapshot, provide a comma-separated list of the names of the collections or regular expressions to exclude. After the connector processes the signal, the incremental snapshot proceeds, but it excludes data from the collections that you specify. 6.5.4.3. Incremental snapshots Incremental snapshots are a specific type of ad hoc snapshot. In an incremental snapshot, the connector captures the baseline state of the tables that you specify, similar to an initial snapshot. However, unlike an initial snapshot, an incremental snapshot captures tables in chunks, rather than all at once. The connector uses a watermarking method to track the progress of the snapshot. By capturing the initial state of the specified tables in chunks rather than in a single monolithic operation, incremental snapshots provide the following advantages over the initial snapshot process: While the connector captures the baseline state of the specified tables, streaming of near real-time events from the transaction log continues uninterrupted. If the incremental snapshot process is interrupted, it can be resumed from the point at which it stopped. You can initiate an incremental snapshot at any time. Incremental snapshot pause signals You can request a connector to pause an in-progress incremental snapshot by creating a signal table entry with the pause-snapshot signal type. After processing the signal, the connector pauses the current in-progress snapshot operation.
Because the snapshot pauses at whatever position it has reached when the connector processes the signal, you cannot specify which data collections to pause. You can pause incremental snapshots for the following Debezium connectors: Db2 MariaDB (Technology Preview) MongoDB MySQL Oracle PostgreSQL SQL Server Table 6.14. Example of a pause incremental snapshot signal record Column Value id d139b9b7-7777-4547-917d-e1775ea61d41 type pause-snapshot You must specify the type of the signal. The data field is ignored. Incremental snapshot resume signals You can request a connector to resume a paused incremental snapshot by creating a signal table entry with the resume-snapshot signal type. After processing the signal, the connector resumes the previously paused snapshot operation. You can resume incremental snapshots for the following Debezium connectors: Db2 MariaDB (Technology Preview) MongoDB MySQL Oracle PostgreSQL SQL Server Table 6.15. Example of a resume incremental snapshot signal record Column Value id d139b9b7-7777-4547-917d-e1775ea61d41 type resume-snapshot You must specify the type of the signal. The data field is ignored. For more information about incremental snapshots, see the Snapshots topic in the documentation for your connector. Additional resources Db2 connector incremental snapshots MongoDB connector incremental snapshots MySQL connector incremental snapshots Oracle connector incremental snapshots PostgreSQL connector incremental snapshots SQL Server connector incremental snapshots 6.5.4.4. Blocking snapshot signals You can request a connector to initiate an ad hoc blocking snapshot by creating a signal with the execute-snapshot signal type and a data.type value of blocking . After processing the signal, the connector runs the requested snapshot operation. Unlike the initial snapshot that a connector runs after it first starts, an ad hoc blocking snapshot occurs during runtime. While a blocking snapshot runs, the connector stops streaming change events from the database; streaming resumes after the snapshot completes. You can initiate ad hoc blocking snapshots at any time. Blocking snapshots are available for the following Debezium connectors: Db2 MariaDB (Technology Preview) MySQL Oracle PostgreSQL SQL Server Table 6.16. Example of a blocking snapshot signal record Column Value id d139b9b7-7777-4547-917d-e1775ea61d41 type execute-snapshot data {"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]} Table 6.17. Example of a blocking snapshot signal message Key Value test_connector {"type":"execute-snapshot","data": {"type": "blocking"}} For more information about blocking snapshots, see the Snapshots topic in the documentation for your connector. Additional resources Db2 connector ad hoc blocking snapshots MySQL connector ad hoc blocking snapshots Oracle connector ad hoc blocking snapshots PostgreSQL connector ad hoc blocking snapshots SQL Server connector ad hoc blocking snapshots 6.5.4.5. Defining a custom signal action Custom actions enable you to extend the Debezium signaling framework to trigger actions that are not available in the default implementation. You can use a custom action with multiple connectors.
To define a custom signal action, you must define the following interface: @FunctionalInterface public interface SignalAction<P extends Partition> { /** * @param signalPayload the content of the signal * @return true if the signal was processed */ boolean arrived(SignalPayload<P> signalPayload) throws InterruptedException; } The io.debezium.pipeline.signal.actions.SignalAction exposes a single method with one parameter, which represents the message payloads sent through the signaling channel. After you define a custom signaling action, use the following SPI interface to make the custom action available to the signaling mechanism: io.debezium.pipeline.signal.actions.SignalActionProvider . public interface SignalActionProvider { /** * Create a map of signal action where the key is the name of the action. * * @param dispatcher the event dispatcher instance * @param connectorConfig the connector config * @return a concrete action */ <P extends Partition> Map<String, SignalAction<P>> createActions(EventDispatcher<P, ? extends DataCollectionId> dispatcher, CommonConnectorConfig connectorConfig); } Your implementation must return a map of the signal action. Set the map key to the name of the action. The key is used as the type of the signal. 6.5.4.6. Debezium core module dependencies A custom actions Java project has compile dependencies on the Debezium core module. Include the following compile dependencies in your project's pom.xml file: <dependency> <groupId>io.debezium</groupId> <artifactId>debezium-core</artifactId> <version>USD{version.debezium}</version> 1 </dependency> 1 USD{version.debezium} represents the version of the Debezium connector. Declare your provider implementation in the META-INF/services/io.debezium.pipeline.signal.actions.SignalActionProvider file. 6.5.4.7. Deploying a custom signal action Prerequisites You have a custom actions Java program. Procedure To use a custom action with a Debezium connector, export the Java project to a JAR file, and copy the file to the directory that contains the JAR file for each Debezium connector that you want to use it with. For example, in a typical deployment, the Debezium connector files are stored in subdirectories of a Kafka Connect directory ( /kafka/connect ), with each connector JAR in its own subdirectory ( /kafka/connect/debezium-connector-db2 , /kafka/connect/debezium-connector-mysql , and so forth). Note To use a custom action with multiple connectors, you must place a copy of the custom signaling channel JAR file in the subdirectory for each connector. | [
"topic.creation.enable = true",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect-cluster spec: config: topic.creation.enable: \"false\"",
"metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\"",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector labels: strimzi.io/cluster: my-connect-cluster spec: config: topic.creation.default.replication.factor: 3 1 topic.creation.default.partitions: 10 2 topic.creation.default.cleanup.policy: compact 3 topic.creation.default.compression.type: lz4 4",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector spec: config: ... 1 topic.creation.inventory.include: dbserver1\\\\.inventory\\\\.* 2 topic.creation.inventory.partitions: 20 topic.creation.inventory.cleanup.policy: compact topic.creation.inventory.delete.retention.ms: 7776000000 3 topic.creation.applicationlogs.include: dbserver1\\\\.logs\\\\.applog-.* 4 topic.creation.applicationlogs.exclude\": dbserver1\\\\.logs\\\\.applog-old-.* 5 topic.creation.applicationlogs.replication.factor: 1 topic.creation.applicationlogs.partitions: 20 topic.creation.applicationlogs.cleanup.policy: delete topic.creation.applicationlogs.retention.ms: 7776000000 topic.creation.applicationlogs.compression.type: lz4",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector spec: config: topic.creation.groups: inventory,applicationlogs",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector spec: config: topic.creation.default.replication.factor: 3, topic.creation.default.partitions: 10, topic.creation.default.cleanup.policy: compact topic.creation.default.compression.type: lz4 topic.creation.groups: inventory,applicationlogs topic.creation.inventory.include: dbserver1\\\\.inventory\\\\.* topic.creation.inventory.partitions: 20 topic.creation.inventory.cleanup.policy: compact topic.creation.inventory.delete.retention.ms: 7776000000 topic.creation.applicationlogs.include: dbserver1\\\\.logs\\\\.applog-.* topic.creation.applicationlogs.exclude\": dbserver1\\\\.logs\\\\.applog-old-.* topic.creation.applicationlogs.replication.factor: 1 topic.creation.applicationlogs.partitions: 20 topic.creation.applicationlogs.cleanup.policy: delete topic.creation.applicationlogs.retention.ms: 7776000000 topic.creation.applicationlogs.compression.type: lz4",
"key.converter=io.apicurio.registry.utils.converter.AvroConverter key.converter.apicurio.registry.url=http://apicurio:8080/apis/registry/v2 key.converter.apicurio.registry.auto-register=true key.converter.apicurio.registry.find-latest=true value.converter=io.apicurio.registry.utils.converter.AvroConverter value.converter.apicurio.registry.url=http://apicurio:8080/apis/registry/v2 value.converter.apicurio.registry.auto-register=true value.converter.apicurio.registry.find-latest=true schema.name.adjustment.mode=avro",
"tree ./my-plugins/ ./my-plugins/ ├── debezium-connector-mongodb | ├── ├── debezium-connector-mysql │ ├── ├── debezium-connector-postgres │ ├── └── debezium-connector-sqlserver ├──",
"FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root COPY ./my-plugins/ /opt/kafka/plugins/ USER 1001",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # image: debezium-container-with-avro",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 1 config: database.hostname: mysql database.port: 3306 database.user: debezium database.password: dbz database.server.id: 184054 topic.prefix: dbserver1 database.include.list: inventory schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 schema.history.internal.kafka.topic: schema-changes.inventory schema.name.adjustment.mode: avro key.converter: io.apicurio.registry.utils.converter.AvroConverter key.converter.apicurio.registry.url: http://apicurio:8080/api key.converter.apicurio.registry.global-id: io.apicurio.registry.utils.serde.strategy.GetOrCreateIdStrategy value.converter: io.apicurio.registry.utils.converter.AvroConverter value.converter.apicurio.registry.url: http://apicurio:8080/api value.converter.apicurio.registry.global-id: io.apicurio.registry.utils.serde.strategy.GetOrCreateIdStrategy",
"logs USD(oc get pods -o name -l strimzi.io/name=my-connect-cluster-connect)",
"2020-02-21 17:57:30,801 INFO Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'debezium' with locking mode 'minimal' (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,805 INFO Snapshot is using user 'debezium' with these MySQL grants: (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot]",
"2020-02-21 17:57:30,822 INFO Step 0: disabling autocommit, enabling repeatable read transactions, and setting lock wait timeout to 10 (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,836 INFO Step 1: flush and obtain global read lock to prevent writes to database (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,839 INFO Step 2: start transaction with consistent snapshot (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,840 INFO Step 3: read binlog position of MySQL primary server (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:30,843 INFO using binlog 'mysql-bin.000003' at position '154' and gtid '' (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:34,423 INFO Step 9: committing transaction (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot] 2020-02-21 17:57:34,424 INFO Completed snapshot in 00:00:03.632 (io.debezium.connector.mysql.SnapshotReader) [debezium-mysqlconnector-dbserver1-snapshot]",
"2020-02-21 17:57:35,584 INFO Transitioning from the snapshot reader to the binlog reader (io.debezium.connector.mysql.ChainedReader) [task-thread-inventory-connector-0] 2020-02-21 17:57:35,613 INFO Creating thread debezium-mysqlconnector-dbserver1-binlog-client (io.debezium.util.Threads) [task-thread-inventory-connector-0] 2020-02-21 17:57:35,630 INFO Creating thread debezium-mysqlconnector-dbserver1-binlog-client (io.debezium.util.Threads) [blc-mysql:3306] Feb 21, 2020 5:57:35 PM com.github.shyiko.mysql.binlog.BinaryLogClient connect INFO: Connected to mysql:3306 at mysql-bin.000003/154 (sid:184054, cid:5) 2020-02-21 17:57:35,775 INFO Connected to MySQL binlog at mysql:3306, starting at binlog file 'mysql-bin.000003', pos=154, skipping 0 events plus 0 rows (io.debezium.connector.mysql.BinlogReader) [blc-mysql:3306]",
"{ \"id\" : \"name:test_server;lsn:29274832;txId:565\", 1 \"source\" : \"/debezium/postgresql/test_server\", 2 \"specversion\" : \"1.0\", 3 \"type\" : \"io.debezium.connector.postgresql.DataChangeEvent\", 4 \"time\" : \"2020-01-13T13:55:39.738Z\", 5 \"datacontenttype\" : \"application/json\", 6 \"iodebeziumop\" : \"r\", 7 \"iodebeziumversion\" : \"2.7.3.Final\", 8 \"iodebeziumconnector\" : \"postgresql\", \"iodebeziumname\" : \"test_server\", \"iodebeziumtsms\" : \"1578923739738\", \"iodebeziumsnapshot\" : \"true\", \"iodebeziumdb\" : \"postgres\", \"iodebeziumschema\" : \"s1\", \"iodebeziumtable\" : \"a\", \"iodebeziumlsn\" : \"29274832\", \"iodebeziumxmin\" : null, \"iodebeziumtxid\": \"565\", 9 \"iodebeziumtxtotalorder\": \"1\", \"iodebeziumtxdatacollectionorder\": \"1\", \"data\" : { 10 \"before\" : null, \"after\" : { \"pk\" : 1, \"name\" : \"Bob\" } } }",
"{ \"id\" : \"name:test_server;lsn:33227720;txId:578\", \"source\" : \"/debezium/postgresql/test_server\", \"specversion\" : \"1.0\", \"type\" : \"io.debezium.connector.postgresql.DataChangeEvent\", \"time\" : \"2020-01-13T14:04:18.597Z\", \"datacontenttype\" : \"application/avro\", 1 \"dataschema\" : \"http://my-registry/schemas/ids/1\", 2 \"iodebeziumop\" : \"r\", \"iodebeziumversion\" : \"2.7.3.Final\", \"iodebeziumconnector\" : \"postgresql\", \"iodebeziumname\" : \"test_server\", \"iodebeziumtsms\" : \"1578924258597\", \"iodebeziumsnapshot\" : \"true\", \"iodebeziumdb\" : \"postgres\", \"iodebeziumschema\" : \"s1\", \"iodebeziumtable\" : \"a\", \"iodebeziumtxId\" : \"578\", \"iodebeziumlsn\" : \"33227720\", \"iodebeziumxmin\" : null, \"iodebeziumtxid\": \"578\", \"iodebeziumtxtotalorder\": \"1\", \"iodebeziumtxdatacollectionorder\": \"1\", \"data\" : \"AAAAAAEAAgICAg==\" 3 }",
"\"value.converter\": \"io.debezium.converters.CloudEventsConverter\", \"value.converter.serializer.type\" : \"json\", 1 \"value.converter.data.serializer.type\" : \"avro\", \"value.converter.avro.schema.registry.url\": \"http://my-registry/schemas/ids/1\"",
"\"value,id:generate,type:generate,dataSchemaName:generate\"",
"\"tombstones.on.delete\": false, \"transforms\": \"addMetadataHeaders,outbox\", \"transforms.addMetadataHeaders.type\": \"org.apache.kafka.connect.transforms.HeaderFromUSDValue\", \"transforms.addMetadataHeaders.fields\": \"source,op,transaction\", \"transforms.addMetadataHeaders.headers\": \"source,op,transaction\", \"transforms.addMetadataHeaders.operation\": \"copy\", \"transforms.addMetadataHeaders.predicate\": \"isHeartbeat\", \"transforms.addMetadataHeaders.negate\": true, \"transforms.outbox.type\": \"io.debezium.transforms.outbox.EventRouter\", \"transforms.outbox.table.expand.json.payload\": true, \"transforms.outbox.table.fields.additional.placement\": \"type:header\", \"predicates\": \"isHeartbeat\", \"predicates.isHeartbeat.type\": \"org.apache.kafka.connect.transforms.predicates.TopicNameMatches\", \"predicates.isHeartbeat.pattern\": \"__debezium-heartbeat.*\", \"value.converter\": \"io.debezium.converters.CloudEventsConverter\", \"value.converter.metadata.source\": \"header\", \"header.converter\": \"org.apache.kafka.connect.json.JsonConverter\", \"header.converter.schemas.enable\": true",
"\"value.converter.metadata.source\": \"value,id:header,type:header,dataSchemaName:header\"",
"\"value.converter.metadata.source\": \"value,id:header,type:generate,dataSchemaName:generate\"",
"\"value.converter.metadata.source\": \"header\"",
"{ \"id\": \"5563ae14-49f8-4579-9641-c1bbc2d76f99\", \"aggregate_type\": \"Initial Snapshot\", \"type\": \"COMPLETED\", 1 \"additional_data\" : { \"connector_name\": \"myConnector\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f\", \"aggregate_type\":\"Initial Snapshot\", \"type\":\"STARTED\", \"additional_data\":{ \"connector_name\":\"my-connector\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d\", \"aggregate_type\":\"Initial Snapshot\", \"type\":\"IN_PROGRESS\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collections\":\"table1, table2\", \"current_collection_in_progress\":\"table1\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d\", \"aggregate_type\":\"Initial Snapshot\", \"type\":\"TABLE_SCAN_COMPLETED\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collection\":\"table1, table2\", \"scanned_collection\":\"table1\", \"total_rows_scanned\":\"100\", \"status\":\"SUCCEEDED\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f\", \"aggregate_type\":\"Initial Snapshot\", \"type\":\"COMPLETED\", \"additional_data\":{ \"connector_name\":\"my-connector\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f\", \"aggregate_type\":\"Initial Snapshot\", \"type\":\"ABORTED\", \"additional_data\":{ \"connector_name\":\"my-connector\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f\", \"aggregate_type\":\"Initial Snapshot\", \"type\":\"SKIPPED\", \"additional_data\":{ \"connector_name\":\"my-connector\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"ff81ba59-15ea-42ae-b5d0-4d74f1f4038f\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"STARTED\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collections\":\"table1, table2\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"068d07a5-d16b-4c4a-b95f-8ad061a69d51\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"PAUSED\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collections\":\"table1, table2\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"a9468204-769d-430f-96d2-b0933d4839f3\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"RESUMED\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collections\":\"table1, table2\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"83fb3d6c-190b-4e40-96eb-f8f427bf482c\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"ABORTED\", \"additional_data\":{ \"connector_name\":\"my-connector\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"d02047d6-377f-4a21-a4e9-cb6e817cf744\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"IN_PROGRESS\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collections\":\"table1, table2\", \"current_collection_in_progress\":\"table1\", \"maximum_key\":\"100\", \"last_processed_key\":\"50\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"TABLE_SCAN_COMPLETED\", \"additional_data\":{ \"connector_name\":\"my-connector\", \"data_collection\":\"table1, table2\", \"scanned_collection\":\"table1\", \"total_rows_scanned\":\"100\", \"status\":\"SUCCEEDED\" }, \"timestamp\": \"1695817046353\" }",
"{ \"id\":\"6d82a3ec-ba86-4b36-9168-7423b0dd5c1d\", \"aggregate_type\":\"Incremental Snapshot\", \"type\":\"COMPLETED\", \"additional_data\":{ \"connector_name\":\"my-connector\" }, \"timestamp\": \"1695817046353\" }",
"CREATE TABLE debezium_signal (id VARCHAR(42) PRIMARY KEY, type VARCHAR(32) NOT NULL, data VARCHAR(2048) NULL);",
"Key = `test_connector` Value = `{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"schema1.table1\", \"schema1.table2\"], \"type\": \"INCREMENTAL\"}}`",
"{\"message\": \"Signal message at offset {}\"}",
"{\"data-collections\": [\"public.MyFirstTable\", \"public.MySecondTable\"]}",
"{\"type\":\"execute-snapshot\",\"data\": {\"data-collections\": [\"public.MyFirstTable\"], \"type\": \"INCREMENTAL\", \"additional-conditions\":[{\"data-collection\": \"public.MyFirstTable\", \"filter\":\"color='blue' AND brand='MyBrand'\"}]}}",
"{\"type\":\"INCREMENTAL\", \"data-collections\": [\"public.MyFirstTable\"]}",
"{\"type\": \"blocking\", \"data-collections\": [\"schema1.table1\", \"schema1.table2\"], \"additional-conditions\": [{\"data-collection\": \"schema1.table1\", \"filter\": \"SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC\"}, {\"data-collection\": \"schema1.table2\", \"filter\": \"SELECT * FROM [schema1].[table2] WHERE column2 > 0\"}]}",
"{\"type\":\"execute-snapshot\",\"data\": {\"type\": \"blocking\"}",
"@FunctionalInterface public interface SignalAction<P extends Partition> { /** * @param signalPayload the content of the signal * @return true if the signal was processed */ boolean arrived(SignalPayload<P> signalPayload) throws InterruptedException; }",
"public interface SignalActionProvider { /** * Create a map of signal action where the key is the name of the action. * * @param dispatcher the event dispatcher instance * @param connectorConfig the connector config * @return a concrete action */ <P extends Partition> Map<String, SignalAction<P>> createActions(EventDispatcher<P, ? extends DataCollectionId> dispatcher, CommonConnectorConfig connectorConfig); }",
"<dependency> <groupId>io.debezium</groupId> <artifactId>debezium-core</artifactId> <version>USD{version.debezium}</version> 1 </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/debezium_user_guide/configuring-debezium-connectors-for-your-application |
Chapter 5. Configuring the JAVA_HOME environment variable on RHEL | Chapter 5. Configuring the JAVA_HOME environment variable on RHEL Some applications require you to set the JAVA_HOME environment variable so that they can find the Red Hat build of OpenJDK installation. Prerequisites You know where you installed Red Hat build of OpenJDK on your system. For example, /opt/jdk/11 . Procedure Set the value of JAVA_HOME . Verify that JAVA_HOME is set correctly. Note You can make the value of JAVA_HOME persistent by exporting the environment variable in ~/.bashrc for single users or /etc/bashrc for system-wide settings. Persistent means that if you close your terminal or reboot your computer, you do not need to reset a value for the JAVA_HOME environment variable. The following example demonstrates using a text editor to enter the commands for exporting JAVA_HOME in ~/.bash_profile for a single user: Additional resources Be aware of the exact meaning of JAVA_HOME . For more information, see Changes/Decouple system java setting from java command setting . | [
"export JAVA_HOME=/opt/jdk/11",
"printenv | grep JAVA_HOME JAVA_HOME=/opt/jdk/11",
"> vi ~/.bash_profile export JAVA_HOME=/opt/jdk/11 export PATH=\"USDJAVA_HOME/bin:USDPATH\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/configuring_red_hat_build_of_openjdk_11_on_rhel/configuring-javahome-environment-variable-on-rhel |
17.4.4. Problems with the X Server Crashing and Non-Root Users | 17.4.4. Problems with the X Server Crashing and Non-Root Users If you are having trouble with the X server crashing when anyone logs in, you may have a full file system (or, a lack of available hard drive space). To verify that this is the problem you are experiencing, run the following command: The df command should help you diagnose which partition is full. For additional information about df and an explanation of the options available (such as the -h option used in this example), refer to the df man page by typing man df at a shell prompt. A key indicator is 100% full or a percentage above 90% or 95% on a partition. The /home/ and /tmp/ partitions can sometimes fill up quickly with user files. You can make some room on that partition by removing old files. After you free up some disk space, try running X as the user that was unsuccessful before. | [
"df -h"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch17s04s04 |
Chapter 1. Installing the Directory Server packages | Chapter 1. Installing the Directory Server packages This chapter contains information about installing the Red Hat Directory Server packages. Prerequisites Red Hat Enterprise Linux (RHEL) is installed on the server. For details about the RHEL version required by the Red Hat Directory Server version you want to install, see the Red Hat Directory Server 11 Release Notes . The system on which you want to install Directory Server is registered to the Red Hat subscription management service. For details about using Subscription Manager , see the corresponding section in the Using and Configuring Subscription Manager guide. A valid Red Hat Directory Server subscription is available in your Red Hat account. The RHEL default repositories, BaseOS and AppStream , are enabled. 1.1. Installing the Directory Server packages Use the following procedure to install the Directory Server packages. Procedure If your account has disabled Simple Content Access (SCA): List the available subscriptions in your Red Hat account and identify the pool ID that provides Red Hat Directory Server. For example: Attach the Red Hat Directory Server subscription to the system using the pool ID from the previous step: Enable the Directory Server packages repository. For example, to enable the Directory Server 11.9 repository, run: Install the redhat-ds:11 module: This command automatically installs all required dependencies. Additional resources For details about installing Red Hat Enterprise Linux and registering the system to the Red Hat Subscription Management service, see Performing a standard RHEL 8 installation . For further details about using the subscription-manager utility, see the Using Red Hat Subscription Manager . For information about how to check the status of SCA, see Simple Content Access . For details about available Directory Server repositories, see What are the names of the Red Hat repositories that have to be enabled . | [
"subscription-manager list --all --available --matches 'Red Hat Directory Server' Subscription Name: Example Subscription Provides: Red Hat Directory Server Pool ID: 5ab6a8df96b03fd30aba9a9c58da57a1 Available: 1",
"subscription-manager attach --pool= 5ab6a8df96b03fd30aba9a9c58da57a1 Successfully attached a subscription for: Example Subscription",
"subscription-manager repos --enable=dirsrv-11.9-for-rhel-8-x86_64-rpms Repository 'dirsrv-11.9-for-rhel-8-x86_64-rpms' is enabled for this system.",
"yum module install redhat-ds:11"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/installation_guide/assembly_installing-the-directory-server-packages_installation-guide |
2.3. Installing the Minimum Amount of Packages Required | 2.3. Installing the Minimum Amount of Packages Required It is best practice to install only the packages you will use because each piece of software on your computer could possibly contain a vulnerability. If you are installing from the DVD media, take the opportunity to select exactly what packages you want to install during the installation. If you find you need another package, you can always add it to the system later. For more information about installing the Minimal install environment, see the Software Selection chapter of the Red Hat Enterprise Linux 7 Installation Guide. A minimal installation can also be performed by a Kickstart file using the --nobase option. For more information about Kickstart installations, see the Package Selection section from the Red Hat Enterprise Linux 7 Installation Guide. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Installing_the_Minimum_Amount_of_Packages_Required |
Chapter 6. Managing templates | Chapter 6. Managing templates A template is a form composed of different UI fields that is defined in a YAML file. Templates include actions , which are steps that are executed in sequential order and can be executed conditionally. You can use templates to easily create Red Hat Developer Hub components, and then publish these components to different locations, such as the Red Hat Developer Hub software catalog, or repositories in GitHub or GitLab. 6.1. Creating a template by using the Template Editor You can create a template by using the Template Editor. Procedure Access the Template Editor by using one of the following options: Open the URL https://<rhdh_url>/create/edit for your Red Hat Developer Hub instance. Click Create... in the navigation menu of the Red Hat Developer Hub console, then click the overflow menu button and select Template editor . Click Edit Template Form . Optional: Modify the YAML definition for the parameters of your template. For more information about these parameters, see Section 6.2, "Creating a template as a YAML file" . In the Name * field, enter a unique name for your template. From the Owner drop-down menu, choose an owner for the template. Click . In the Repository Location view, enter the following information about the hosted repository that you want to publish the template to: Select an available Host from the drop-down menu. Note Available hosts are defined in the YAML parameters by the allowedHosts field: Example YAML # ... ui:options: allowedHosts: - github.com # ... In the Owner * field, enter an organization, user or project that the hosted repository belongs to. In the Repository * field, enter the name of the hosted repository. Click Review . Review the information for accuracy, then click Create . Verification Click the Catalog tab in the navigation panel. In the Kind drop-down menu, select Template . Confirm that your template is shown in the list of existing templates. 6.2. Creating a template as a YAML file You can create a template by defining a Template object as a YAML file. The Template object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service. Template object example apiVersion: scaffolder.backstage.io/v1beta3 kind: Template metadata: name: template-name 1 title: Example template 2 description: An example template for v1beta3 scaffolder. 3 spec: owner: backstage/techdocs-core 4 type: service 5 parameters: 6 - title: Fill in some steps required: - name properties: name: title: Name type: string description: Unique name of the component owner: title: Owner type: string description: Owner of the component - title: Choose a location required: - repoUrl properties: repoUrl: title: Repository Location type: string steps: 7 - id: fetch-base name: Fetch Base action: fetch:template # ... output: 8 links: - title: Repository 9 url: USD{{ steps['publish'].output.remoteUrl }} - title: Open in catalog 10 icon: catalog entityRef: USD{{ steps['register'].output.entityRef }} # ... 1 Specify a name for the template. 2 Specify a title for the template. This is the title that is visible on the template tile in the Create... view. 3 Specify a description for the template. This is the description that is visible on the template tile in the Create... view. 4 Specify the ownership of the template. 
The owner field provides information about who is responsible for maintaining or overseeing the template within the system or organization. In the provided example, the owner field is set to backstage/techdocs-core . This means that this template belongs to the techdocs-core project in the backstage namespace. 5 Specify the component type. Any string value is accepted for this required field, but your organization should establish a proper taxonomy for these. Red Hat Developer Hub instances may read this field and behave differently depending on its value. For example, a website type component may present tooling in the Red Hat Developer Hub interface that is specific to just websites. The following values are common for this field: service A backend service, typically exposing an API. website A website. library A software library, such as an npm module or a Java library. 6 Use the parameters section to specify parameters for user input that are shown in a form view when a user creates a component by using the template in the Red Hat Developer Hub console. Each parameters subsection, defined by a title and properties, creates a new form page with that definition. 7 Use the steps section to specify steps that are executed in the backend. These steps must be defined by using a unique step ID, a name, and an action. You can view actions that are available on your Red Hat Developer Hub instance by visiting the URL https://<rhdh_url>/create/actions . 8 Use the output section to specify the structure of output data that is created when the template is used. The output section, particularly the links subsection, provides valuable references and URLs that users can utilize to access and interact with components that are created from the template. 9 Provides a reference or URL to the repository associated with the generated component. 10 Provides a reference or URL that allows users to open the generated component in a catalog or directory where various components are listed. Additional resources Backstage documentation - Writing Templates Backstage documentation - Builtin actions Backstage documentation - Writing Custom Actions 6.3. Importing an existing template to Red Hat Developer Hub You can add an existing template to your Red Hat Developer Hub instance by using the Catalog Processor. Prerequisites You have created a directory or repository that contains at least one template YAML file. If you want to use a template that is stored in a repository such as GitHub or GitLab, you must configure a Red Hat Developer Hub integration for your provider. Procedure In the app-config.yaml configuration file, modify the catalog.rules section to include a rule for templates, and configure the catalog.locations section to point to the template that you want to add, as shown in the following example: # ... catalog: rules: - allow: [Template] 1 locations: - type: url 2 target: https://<repository_url>/example-template.yaml 3 # ... 1 To allow new templates to be added to the catalog, you must add a Template rule. 2 If you are importing templates from a repository, such as GitHub or GitLab, use the url type. 3 Specify the URL for the template. Verification Click the Catalog tab in the navigation panel. In the Kind drop-down menu, select Template . Confirm that your template is shown in the list of existing templates. Additional resources Enabling the GitHub authentication provider | [
"ui:options: allowedHosts: - github.com",
"apiVersion: scaffolder.backstage.io/v1beta3 kind: Template metadata: name: template-name 1 title: Example template 2 description: An example template for v1beta3 scaffolder. 3 spec: owner: backstage/techdocs-core 4 type: service 5 parameters: 6 - title: Fill in some steps required: - name properties: name: title: Name type: string description: Unique name of the component owner: title: Owner type: string description: Owner of the component - title: Choose a location required: - repoUrl properties: repoUrl: title: Repository Location type: string steps: 7 - id: fetch-base name: Fetch Base action: fetch:template # output: 8 links: - title: Repository 9 url: USD{{ steps['publish'].output.remoteUrl }} - title: Open in catalog 10 icon: catalog entityRef: USD{{ steps['register'].output.entityRef }}",
"catalog: rules: - allow: [Template] 1 locations: - type: url 2 target: https://<repository_url>/example-template.yaml 3"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/administration_guide_for_red_hat_developer_hub/assembly-admin-templates |
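For the template import described above, an alternative to checking the Catalog page is to query the Developer Hub catalog API from a terminal. The sketch below is illustrative only: it assumes a static API token exported as RHDH_TOKEN and uses the placeholder <rhdh_url>; how authentication is configured varies by instance and is not covered by the original procedure.

```bash
# Hypothetical verification sketch -- RHDH_TOKEN and <rhdh_url> are placeholders,
# and token-based auth is an assumption, not part of the documented steps.
curl -s \
  -H "Authorization: Bearer ${RHDH_TOKEN}" \
  "https://<rhdh_url>/api/catalog/entities?filter=kind=template" \
  | jq -r '.[].metadata.name'   # list the names of all registered templates
```

If the imported template's metadata.name appears in the output, the catalog processor has picked it up.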
Chapter 6. Upgrading AMQ Interconnect | Chapter 6. Upgrading AMQ Interconnect You should upgrade AMQ Interconnect to the latest version to ensure that you have the latest enhancements and fixes. The upgrade process involves installing the new AMQ Interconnect packages and restarting your routers. You can use these instructions to upgrade AMQ Interconnect to a new minor release or maintenance release . Minor Release AMQ Interconnect periodically provides point releases, which are minor updates that include new features, as well as bug and security fixes. If you plan to upgrade from one AMQ Interconnect point release to another, for example, from AMQ Interconnect 1.0 to AMQ Interconnect 1.1, code changes should not be required for applications that do not use private, unsupported, or technical preview components. Maintenance Release AMQ Interconnect also periodically provides maintenance releases that contain bug fixes. Maintenance releases increment the minor release version by the last digit, for example from 1.0.0 to 1.0.1. A maintenance release should not require code changes; however, some maintenance releases might require configuration changes. Prerequisites Before performing an upgrade, you should have reviewed the release notes for the target release to ensure that you understand the new features, enhancements, fixes, and issues. To find the release notes for the target release, see the Red Hat Customer Portal . Procedure Upgrade the qpid-dispatch-router and qpid-dispatch-tools packages and their dependencies: USD sudo yum update qpid-dispatch-router qpid-dispatch-tools For more information, see Chapter 5, Installing AMQ Interconnect . Restart each router in your router network. To avoid disruption, you should restart each router one at a time. This example restarts a router in Red Hat Enterprise Linux 7: USD systemctl restart qdrouterd.service For more information about starting a router, see Section 5.3, "Starting a router" . | [
"sudo yum update qpid-dispatch-router qpid-dispatch-tools",
"systemctl restart qdrouterd.service"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/upgrading_amq_interconnect |
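A minimal per-router sketch of the upgrade and restart steps above, useful when walking through the router network one host at a time; it only combines the commands already shown with rpm and systemctl checks.

```bash
# Run on each router host in turn, not in parallel, to avoid disrupting the network.
rpm -q qpid-dispatch-router                         # record the currently installed version
sudo yum update -y qpid-dispatch-router qpid-dispatch-tools
sudo systemctl restart qdrouterd.service
systemctl is-active qdrouterd.service               # expect "active"
rpm -q qpid-dispatch-router                         # confirm the new version is installed
```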
Chapter 4. Running Containers as systemd Services with Podman | Chapter 4. Running Containers as systemd Services with Podman Podman (Pod Manager) is a fully featured container engine that is a simple daemonless tool. Podman provides a Docker-CLI comparable command line that eases the transition from other container engines and allows the management of pods, containers and images. It was not originally designed to bring up an entire Linux system or manage services for such things as start-up order, dependency checking, and failed service recovery. That is the job of a full-blown initialization system like systemd. Red Hat has become a leader in integrating containers with systemd, so that OCI and Docker-formatted containers built by Podman can be managed in the same way that other services and features are managed in a Linux system. This chapter describes how you can use the systemd initialization service to work with containers in two different ways: Starting Containers with systemd : By setting up a systemd unit file on your host computer, you can have the host automatically start, stop, check the status, and otherwise manage a container as a systemd service. Starting services within a container using systemd : Many Linux services (Web servers, file servers, database servers, and so on) are already packaged for Red Hat Enterprise Linux to run as systemd services. If you are using the latest RHEL container image, you can set the RHEL container image to start the systemd service, then automatically start selected services within the container when the container starts up. The following two sections describe how to use systemd container in those ways. 4.1. Starting Containers with systemd When you set up a container to start as a systemd service, you can define the order in which the containerized service runs, check for dependencies (like making sure another service is running, a file is available or a resource is mounted), and even have a container start by using the runc command. This section provides an example of a container that is configured to run directly on a RHEL or RHEL Atomic Host system as a systemd service. Get the image you want to run on your system. For example, to use the redis service from docker.io, run the following command: Open Selinux permission. If SELinux is enabled on your system, you must turn on the container_manage_cgroup boolean to run containers with systemd as shown here (see the Containers running systemd solution for details): Run the image as a container, giving it a name you want to use in the systemd service file. For example, to name the running redis container redis_server, type the following: Configure the container as a systemd service by creating the unit configuration file in the /etc/systemd/system/ directory. For example, the contents of the /etc/systemd/system/redis-container.service can look as follows (note that redis_server matches the name you set on the podman run line): After creating the unit file, to start the container automatically at boot time, type the following: Once the service is enabled, it will start at boot time. To start it immediately and check the status of the service, type the following: To learn more about configuring services with systemd, refer to the System Administrator's Guide chapter called Managing Services with systemd . 4.2. Starting services within a container using systemd A package with the systemd initialization system is included in the official Red Hat Enterprise Linux Init base image named rhel7-init . 
This means that applications created to be managed with systemd can be started and managed inside a container. A container running systemd will: Note Previously, a modified version of the systemd initialization system called systemd-container was included in the Red Hat Enterprise Linux versions 7.2 base images. Now, the systemd package is the same across systems. Start the /sbin/init process (the systemd service) to run as PID 1 within the container. Start all systemd services that are installed and enabled within the container, in order of dependencies. Allow systemd to restart services or kill zombie processes for services started within the container. The general steps for building a container that is ready to be used as a systemd services is: Install the package containing the systemd-enabled service inside the container. This can include dozens of services that come with RHEL, such as Apache Web Server (httpd), FTP server (vsftpd), Proxy server (squid), and many others. For this example, we simply install an Apache (httpd) Web server. Use the systemctl command to enable the service inside the container. Add data for the service to use in the container (in this example, we add a Web server test page). For a real deployment, you would probably connect to outside storage. Expose any ports needed to access the service. Set /sbin/init as the default process to start when the container runs In this example, we build a container by creating a Dockerfile that installs and configures a Web server (httpd) to start automatically by the systemd service (/sbin/init) when the container is run on a host system. Create Dockerfile : In a separate directory, create a file named Dockerfile with the following contents: The Dockerfile installs the httpd package, enables the httpd service to start at boot time (i.e. when the container starts), creates a test file (index.html), exposes the Web server to the host (port 80), and starts the systemd init service (/sbin/init) when the container starts. Build the container : From the directory containing the Dockerfile, type the following: Open Selinux permission . If SELinux is enabled on your system, you must turn on the container_manage_cgroup boolean to run containers with systemd as shown here (see the Containers running systemd solution for details): Run the container : Once the container is built and named mysysd, type the following to run the container: From this command, the mysysd image runs as the mysysd_run container as a daemon process, with port 80 from the container exposed to port 80 on the host system. Check that the container is running : To make sure that the container is running and that the service is working, type the following commands: At this point, you have a container that starts up a Web server as a systemd service inside the container. Install and run any services you like in this same way by modifying the Dockerfile and configuring data and opening ports as appropriate. | [
"podman pull docker.io/redis",
"setsebool -P container_manage_cgroup on",
"podman run -d --name redis_server -p 6379:6379 redis",
"[Unit] Description=Redis container [Service] Restart=always ExecStart=/usr/bin/podman start -a redis_server ExecStop=/usr/bin/podman stop -t 2 redis_server [Install] WantedBy=local.target",
"systemctl enable redis-container.service",
"systemctl start redis-container.service systemctl status redis-container.service * redis-container.service - Redis container Loaded: loaded (/etc/systemd/system/redis-container.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2019-03-15 16:22:55 EDT; 6s ago Main PID: 1540 (podman) Tasks: 8 (limit: 2353) Memory: 7.7M CGroup: /system.slice/redis-container.service └─1540 /usr/bin/podman start -a redis_server Mar 15 16:22:55 localhost.localdomain systemd[1]: Started Redis container.",
"FROM rhel7-init RUN yum -y install httpd; yum clean all; systemctl enable httpd; RUN echo \"Successful Web Server Test\" > /var/www/html/index.html RUN mkdir /etc/systemd/system/httpd.service.d/; echo -e '[Service]\\nRestart=always' > /etc/systemd/system/httpd.service.d/httpd.conf EXPOSE 80 CMD [ \"/sbin/init\" ]",
"podman build --format=docker -t mysysd .",
"setsebool -P container_manage_cgroup 1",
"podman run -d --name=mysysd_run -p 80:80 mysysd",
"podman ps | grep mysysd_run a282b0c2ad3d localhost/mysysd:latest /sbin/init 15 seconds ago Up 14 seconds ago 0.0.0.0:80->80/tcp mysysd_run curl localhost/index.html Successful Web Server Test"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman |
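The host-side systemd integration shown earlier for the redis container can be applied in the same way to the mysysd_run web server container built in this chapter. The unit below is a sketch that mirrors that example; the unit file name is arbitrary.

```bash
# Sketch: manage the mysysd_run container as a host systemd service.
sudo tee /etc/systemd/system/mysysd-container.service > /dev/null <<'EOF'
[Unit]
Description=Containerized httpd managed by systemd

[Service]
Restart=always
ExecStart=/usr/bin/podman start -a mysysd_run
ExecStop=/usr/bin/podman stop -t 2 mysysd_run

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable mysysd-container.service
sudo systemctl start mysysd-container.service
systemctl status mysysd-container.service
```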
Remediation, fencing, and maintenance | Remediation, fencing, and maintenance Workload Availability for Red Hat OpenShift 24.3 Workload Availability remediation, fencing, and maintenance Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.3/html/remediation_fencing_and_maintenance/index |
23.18. Storage Pools | 23.18. Storage Pools Although all storage pool back-ends share the same public APIs and XML format, they have varying levels of capabilities. Some may allow creation of volumes, others may only allow use of pre-existing volumes. Some may have constraints on volume size, or placement. The top level element for a storage pool document is <pool> . It has a single attribute type , which can take the following values: dir, fs, netfs, disk, iscsi, logical, scsi, mpath, rbd, sheepdog , or gluster . 23.18.1. Providing Metadata for the Storage Pool The following XML example, shows the metadata tags that can be added to a storage pool. In this example, the pool is an iSCSI storage pool. <pool type="iscsi"> <name>virtimages</name> <uuid>3e3fce45-4f53-4fa7-bb32-11f34168b82b</uuid> <allocation>10000000</allocation> <capacity>50000000</capacity> <available>40000000</available> ... </pool> Figure 23.79. General metadata tags The elements that are used in this example are explained in the Table 23.27, " virt-sysprep commands" . Table 23.27. virt-sysprep commands Element Description <name> Provides a name for the storage pool which must be unique to the host physical machine. This is mandatory when defining a storage pool. <uuid> Provides an identifier for the storage pool which must be globally unique. Although supplying the UUID is optional, if the UUID is not provided at the time the storage pool is created, a UUID will be automatically generated. <allocation> Provides the total storage allocation for the storage pool. This may be larger than the sum of the total allocation across all storage volumes due to the metadata overhead. This value is expressed in bytes. This element is read-only and the value should not be changed. <capacity> Provides the total storage capacity for the pool. Due to underlying device constraints, it may not be possible to use the full capacity for storage volumes. This value is in bytes. This element is read-only and the value should not be changed. <available> Provides the free space available for allocating new storage volumes in the storage pool. Due to underlying device constraints, it may not be possible to allocate the all of the free space to a single storage volume. This value is in bytes. This element is read-only and the value should not be changed. 23.18.2. Source Elements Within the <pool> element there can be a single <source> element defined (only one). The child elements of <source> depend on the storage pool type. Some examples of the XML that can be used are as follows: ... <source> <host name="iscsi.example.com"/> <device path="demo-target"/> <auth type='chap' username='myname'> <secret type='iscsi' usage='mycluster_myname'/> </auth> <vendor name="Acme"/> <product name="model"/> </source> ... Figure 23.80. Source element option 1 ... <source> <adapter type='fc_host' parent='scsi_host5' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/> </source> ... Figure 23.81. Source element option 2 The child elements that are accepted by <source> are explained in Table 23.28, "Source child elements commands" . Table 23.28. Source child elements commands Element Description <device> Provides the source for storage pools backed by host physical machine devices (based on <pool type=> (as shown in Section 23.18, "Storage Pools" )). May be repeated multiple times depending on back-end driver. Contains a single attribute path which is the fully qualified path to the block device node. 
<dir> Provides the source for storage pools backed by directories ( <pool type='dir'> ), or optionally to select a subdirectory within a storage pool that is based on a filesystem ( <pool type='gluster'> ). This element may only occur once per ( <pool> ). This element accepts a single attribute ( <path> ) which is the full path to the backing directory. <adapter> Provides the source for storage pools backed by SCSI adapters ( <pool type='scsi'> ). This element may only occur once per ( <pool> ). Attribute name is the SCSI adapter name (ex. "scsi_host1". Although "host1" is still supported for backwards compatibility, it is not recommended. Attribute type specifies the adapter type. Valid values are 'fc_host'| 'scsi_host' . If omitted and the name attribute is specified, then it defaults to type='scsi_host' . To keep backwards compatibility, the attribute type is optional for the type='scsi_host' adapter, but mandatory for the type='fc_host' adapter. Attributes wwnn (Word Wide Node Name) and wwpn (Word Wide Port Name) are used by the type='fc_host' adapter to uniquely identify the device in the Fibre Channel storage fabric (the device can be either a HBA or vHBA). Both wwnn and wwpn should be specified. For instructions on how to get wwnn/wwpn of a (v)HBA, see Section 20.27.11, "Collect Device Configuration Settings" . The optional attribute parent specifies the parent device for the type='fc_host' adapter. <host> Provides the source for storage pools backed by storage from a remote server ( type='netfs'|'iscsi'|'rbd'|'sheepdog'|'gluster' ). This element should be used in combination with a <directory> or <device> element. Contains an attribute name which is the host name or IP address of the server. May optionally contain a port attribute for the protocol specific port number. <auth> If present, the <auth> element provides the authentication credentials needed to access the source by the setting of the type attribute (pool type='iscsi'|'rbd' ). The type must be either type='chap' or type='ceph' . Use "ceph" for Ceph RBD (Rados Block Device) network sources and use "iscsi" for CHAP (Challenge-Handshake Authentication Protocol) iSCSI targets. Additionally a mandatory attribute username identifies the user name to use during authentication as well as a sub-element secret with a mandatory attribute type, to tie back to a libvirt secret object that holds the actual password or other credentials. The domain XML intentionally does not expose the password, only the reference to the object that manages the password. The secret element requires either a uuid attribute with the UUID of the secret object or a usage attribute matching the key that was specified in the secret object. <name> Provides the source for storage pools backed by a storage device from a named element <type> which can take the values: ( type='logical'|'rbd'|'sheepdog','gluster' ). <format> Provides information about the format of the storage pool <type> which can take the following values: type='logical'|'disk'|'fs'|'netfs' ). Note that this value is back-end specific. This is typically used to indicate a filesystem type, or a network filesystem type, or a partition table type, or an LVM metadata type. As all drivers are required to have a default value for this, the element is optional. <vendor> Provides optional information about the vendor of the storage device. This contains a single attribute <name> whose value is back-end specific. <product> Provides optional information about the product name of the storage device. 
This contains a single attribute <name> whose value is back-end specific. 23.18.3. Creating Target Elements A single <target> element is contained within the top level <pool> element for the following types: ( type='dir'|'fs'|'netfs'|'logical'|'disk'|'iscsi'|'scsi'|'mpath' ). This tag is used to describe the mapping of the storage pool into the host filesystem. It can contain the following child elements: <pool> ... <target> <path>/dev/disk/by-path</path> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> <timestamps> <atime>1341933637.273190990</atime> <mtime>1341930622.047245868</mtime> <ctime>1341930622.047245868</ctime> </timestamps> <encryption type='...'> ... </encryption> </target> </pool> Figure 23.82. Target elements XML example The table ( Table 23.29, "Target child elements" ) explains the child elements that are valid for the parent <target> element: Table 23.29. Target child elements Element Description <path> Provides the location at which the storage pool will be mapped into the local filesystem namespace. For a filesystem or directory-based storage pool it will be the name of the directory in which storage volumes will be created. For device-based storage pools it will be the name of the directory in which the device's nodes exist. For the latter, /dev/ may seem like the logical choice, however, the device's nodes there are not guaranteed to be stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations. <permissions> This is currently only useful for directory- or filesystem-based storage pools, which are mapped as a directory into the local filesystem namespace. It provides information about the permissions to use for the final directory when the storage pool is built. The <mode> element contains the octal permission set. The <owner> element contains the numeric user ID. The <group> element contains the numeric group ID. The <label> element contains the MAC (for example, SELinux) label string. <timestamps> Provides timing information about the storage volume. Up to four sub-elements are present, where timestamps='atime'|'btime|'ctime'|'mtime' holds the access, birth, change, and modification time of the storage volume, where known. The used time format is <seconds> . <nanoseconds> since the beginning of the epoch (1 Jan 1970). If nanosecond resolution is 0 or otherwise unsupported by the host operating system or filesystem, then the nanoseconds part is omitted. This is a read-only attribute and is ignored when creating a storage volume. <encryption> If present, specifies how the storage volume is encrypted. For more information, see libvirt upstream pages . 23.18.4. Setting Device Extents If a storage pool exposes information about its underlying placement or allocation scheme, the <device> element within the <source> element may contain information about its available extents. Some storage pools have a constraint that a storage volume must be allocated entirely within a single constraint (such as disk partition pools). Thus, the extent information allows an application to determine the maximum possible size for a new storage volume. For storage pools supporting extent information, within each <device> element there will be zero or more <freeExtent> elements. Each of these elements contains two attributes, <start> and <end> which provide the boundaries of the extent on the device, measured in bytes. | [
"<pool type=\"iscsi\"> <name>virtimages</name> <uuid>3e3fce45-4f53-4fa7-bb32-11f34168b82b</uuid> <allocation>10000000</allocation> <capacity>50000000</capacity> <available>40000000</available> </pool>",
"<source> <host name=\"iscsi.example.com\"/> <device path=\"demo-target\"/> <auth type='chap' username='myname'> <secret type='iscsi' usage='mycluster_myname'/> </auth> <vendor name=\"Acme\"/> <product name=\"model\"/> </source>",
"<source> <adapter type='fc_host' parent='scsi_host5' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/> </source>",
"<pool> <target> <path>/dev/disk/by-path</path> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> <timestamps> <atime>1341933637.273190990</atime> <mtime>1341930622.047245868</mtime> <ctime>1341930622.047245868</ctime> </timestamps> <encryption type='...'> </encryption> </target> </pool>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Storage_pools |
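To tie the <pool>, <source>, and <target> elements together, here is a sketch of a minimal directory-backed pool definition and the virsh commands to define and start it. The pool name, path, and permissions are illustrative, not taken from the original text.

```bash
# Minimal dir-type pool sketch; name and path are examples only.
cat > /tmp/guest_images_pool.xml <<'EOF'
<pool type='dir'>
  <name>guest_images</name>
  <target>
    <path>/var/lib/libvirt/images/guest_images</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
EOF

virsh pool-define /tmp/guest_images_pool.xml
virsh pool-build guest_images        # creates the target directory if missing
virsh pool-start guest_images
virsh pool-autostart guest_images
virsh pool-info guest_images
```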
8.73. gnome-session | 8.73. gnome-session 8.73.1. RHBA-2014:1585 - gnome-session bug fix and enhancement update Updated gnome-session packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The gnome-session packages manage the GNOME desktop session. It starts up the other core components of GNOME and handles logout and saving of the session. Bug Fixes BZ# 684767 Due to insufficient checking, the "Switch User" button appeared in the logout dialog window even when user switching was disabled by the lock down configuration. A patch has been provided to fix this bug, and the "Switch User" button is now removed when the user logs out. BZ# 785828 , BZ# 1069503 Due to incorrect clean up of resources at shutdown, the "Startup Applications" GUI did not submit changes immediately. If the user closed the dialog window earlier than 2 seconds after making a change, the change failed to be committed. To fix this bug, the dialog window on shutdown has been deleted, so that its dispose handler commits pending changes immediately. As a result, the user can enable or disable additional startup programs and quickly close the dialog window without the risk of losing changes. BZ# 982423 Prior to this update, there were inadequate checks in the gnome-session utility for a preexisting gnome-session instance. Consequently, running gnome-session within a GNOME session started a nested broken session. With this update, a check for the SESSION_MANAGER environment variable has been added. As a result, if the user runs gnome-session within a preexisting session by mistake, an error message is returned. In addition, this update adds the following Enhancement BZ# 786573 Previously, if the user accidentally clicked "Remember Running Applications" and was using the custom session selector, they could not proceed without saving. This update provides the close button for the session selector to enable the user to refrain from saving. Users of gnome-session are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/gnome-session |
Chapter 2. Updating a model | Chapter 2. Updating a model Red Hat Enterprise Linux AI allows you to upgrade locally downloaded LLMs to the latest version of the model. Table 2.1. RHEL AI version 1.3 LLM support Feature Description Version introduced NVIDIA AMD granite-7b-starter Granite student model for customizing and fine-tuning RHEL AI 1.1 Deprecated Deprecated granite-7b-redhat-lab Granite model for inference serving RHEL AI 1.1 Deprecated Deprecated granite-8b-starter Granite student model for customizing and fine-tuning RHEL AI 1.3 Generally available Technology preview granite-8b-redhat-lab Granite model for inference serving RHEL AI 1.3 Generally available Technology preview granite-8b-lab-v2-preview Preview of the version 2 8b Granite model for inference serving RHEL AI 1.3 Technology preview Technology preview 2.1. Updating the models You can upgrade your local models to the latest version of the model using the RHEL AI tool set. Prerequisites You installed the InstructLab tools with the bootable container image. You initialized InstructLab and can use the ilab CLI. You downloaded LLMs on Red Hat Enterprise Linux AI. You created a Red Hat registry account and logged in on your machine. Procedure You can upgrade any model by running the following command. $ ilab model download --repository <repository_and_model> --release latest where: <repository_and_model> Specifies the repository location and name of the model. You can access the models from the registry.redhat.io/rhelai1/ repository. <release> Specifies the version of the model. Set it to latest for the most up-to-date version of the model, or to a specific release. Verification You can view all the downloaded models on your system with the following command: $ ilab model list | [
"ilab model download --repository <repository_and_model> --release latest",
"ilab model list"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/updating/updating_a_model |
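As a concrete illustration of the update command above: the repository path used here is an assumption based on the registry.redhat.io/rhelai1/ location mentioned in the text, not a value confirmed by the source — check the registry catalog for the exact repository name of the model you want to refresh.

```bash
# Illustrative only: the repository path and docker:// prefix are assumptions.
ilab model download \
  --repository docker://registry.redhat.io/rhelai1/granite-8b-starter-v1 \
  --release latest

ilab model list    # confirm the refreshed model and its version appear locally
```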
Chapter 1. Installation | Chapter 1. Installation FCoE Support in the Kickstart File When using a kickstart file to install Red Hat Enterprise Linux 6.4, with the new fcoe kickstart option you can specify which Fibre Channel over Ethernet (FCoE) devices should be activated automatically in addition to those discovered by Enhanced Disk Drive (EDD) services. For more information, refer to the Kickstart Options section in the Red Hat Enterprise Linux 6 Installation Guide . Installation over VLAN In Red Hat Enterprise Linux 6.4, the vlanid= boot option and the --vlanid= kickstart option allow you to set a virtual LAN ID (802.1q tag) for a specified network device. By specifying either one of these options, installation of the system can be done over a VLAN. Configuring Bonding The bond boot option and the --bondslaves and --bondopts kickstart options can now be used to configure bonding as a part of the installation process. For more information on how to configure bonding, refer to the following parts of the Red Hat Enterprise Linux 6 Installation Guide : section Kickstart Options and chapter Boot Options . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/chap-installation |
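The three kickstart options called out above (fcoe, --vlanid, and the bonding options) can be sketched in a fragment such as the one below. Device names, the VLAN ID, and the bonding options are illustrative assumptions; in practice you would keep only the directives that match your network layout.

```bash
# Hypothetical kickstart fragment; adjust or drop directives to match your hardware.
cat > /tmp/network-include.ks <<'EOF'
# Activate an FCoE device in addition to those discovered by EDD
fcoe --nic=eth3

# Install the system over VLAN 171 on eth0
network --device=eth0 --bootproto=dhcp --vlanid=171

# Configure bonding of eth1 and eth2 as part of the installation
network --device=bond0 --bootproto=dhcp --bondslaves=eth1,eth2 --bondopts=mode=active-backup,miimon=100
EOF
# The fragment can be pulled into a main kickstart file with: %include /tmp/network-include.ks
```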
5.2. Removing a Cluster from the luci Interface | 5.2. Removing a Cluster from the luci Interface You can remove a cluster from the luci management GUI without affecting the cluster services or cluster membership. If you remove a cluster, you can later add the cluster back, or you can add it to another luci instance, as described in Section 5.1, "Adding an Existing Cluster to the luci Interface". To remove a cluster from the luci management GUI without affecting the cluster services or cluster membership, follow these steps: Click Manage Clusters from the menu on the left side of the luci Homebase page. The Clusters screen appears. Select the cluster or clusters you wish to remove. Click Remove. The system will ask you to confirm whether to remove the cluster from the luci management GUI. For information on deleting a cluster entirely, stopping all cluster services, and removing the cluster configuration information from the nodes themselves, see Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-removeclust-conga-ca
7.5. Set an IP Address to Run Red Hat JBoss Data Grid | 7.5. Set an IP Address to Run Red Hat JBoss Data Grid For production use, the Red Hat JBoss Data Grid server must be bound to a specified IP address rather than to 127.0.0.1/localhost. Use the -b parameter with the script to specify an IP address. For standalone mode, set the IP address as follows: For clustered mode, set the IP address as follows: | [
"USDJDG_HOME/bin/standalone.sh -b USD{IP_ADDRESS}",
"USDJDG_HOME/bin/clustered.sh -b USD{IP_ADDRESS}"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/set_an_ip_address_to_run_red_hat_jboss_data_grid |
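For example, with a concrete address (192.0.2.10 is from the documentation address range; substitute the interface IP that your clients should reach):

```bash
# Standalone mode bound to a specific address
$JDG_HOME/bin/standalone.sh -b 192.0.2.10

# Clustered mode bound to the same address
$JDG_HOME/bin/clustered.sh -b 192.0.2.10
```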
Chapter 1. Introduction | Chapter 1. Introduction This document describes how to use the RHEL HA Add-On on RHEL 9 to set up an HA cluster to automate a 'performance-optimized' SAP HANA Scale-Up System Replication setup. 'Performance-optimized' means that there is only a single SAP HANA instance running on each node that has control over most of the resources (CPU, RAM) on each node, which means the SAP HANA instances can run with as much performance as possible. Since the secondary SAP HANA instance is configured to pre-load all data in this scenario, a takeover in case of a failure of the primary SAP HANA instance should happen quickly. The following diagram shows an overview of what the setup looks like: With a 'performance-optimized' SAP HANA System Replication setup it is also possible to use the Active/Active (Read Enabled) SAP HANA System Replication configuration, which will allow read-only access for clients on the secondary SAP HANA instance. In addition to the basic setup for managing 'performance-optimized ' SAP HANA Scale-Up System Replication this document also provides optional instructions for the additional cluster configuration that is required for managing an Active/Active (Read Enabled) SAP HANA Scale-Up System Replication configuration. The resource agents and the cluster configuration used for the setup described in this document have been developed based on guidelines provided by SAP in SAP Note 2063657 - SAP HANA System Replication Takeover Decision Guideline . This document does not cover the installation and configuration of RHEL 9 for running SAP HANA or the SAP HANA installation procedure. Please take a look at Installing RHEL 9 for SAP Solutions for information on how to install and configure RHEL 9 for running SAP HANA on each HA cluster node and refer to the SAP HANA Installation guide and the guidelines from the hardware vendor/cloud provider for installing the SAP HANA instances. The setup described in this document was done using on-premise 'bare-metal' servers. If you plan to use such a setup on a public cloud environment like AWS, Azure or GCP, please check the documentation for the specific platform: HA Solutions for 'performance optimized' SAP HANA Scale-Up System Replication- Configuration Guides . 1.1. Support policies Please refer to Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster . 1.2. Required subscription and repositories As documented in SAP Note 3108302 - SAP HANA DB: Recommended OS Settings for RHEL 9 , a RHEL for SAP Solutions subscription is required for every RHEL 9 system running SAP HANA. In addition to the standard repos for running SAP HANA on RHEL 9, all HA cluster nodes must also have the repo for the RHEL HA Add-On enabled. The list of enabled repos should look similar to the following: See RHEL for SAP Subscriptions and Repositories , for more information on how to ensure the correct subscription and repos are enabled on each HA cluster node. | [
"dnf repolist repo id repo name status rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) 8,603 rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) 3,690 rhel-9-for-x86_64-highavailability-rpms Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) 156 rhel-9-for-x86_64-sap-solutions-rpms Red Hat Enterprise Linux 9 for x86_64 - SAP Solutions (RPMs) 10"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_overview_v9-automating-sap-hana-scale-up-system-replication |
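One way to enable the repositories listed above on each HA cluster node, assuming the system is already registered with a RHEL for SAP Solutions subscription:

```bash
# Enable the BaseOS, AppStream, SAP Solutions, and High Availability repositories
subscription-manager repos \
  --enable=rhel-9-for-x86_64-baseos-rpms \
  --enable=rhel-9-for-x86_64-appstream-rpms \
  --enable=rhel-9-for-x86_64-sap-solutions-rpms \
  --enable=rhel-9-for-x86_64-highavailability-rpms

dnf repolist    # the output should match the list shown above
```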
Chapter 25. CSimple | Chapter 25. CSimple The CSimple language is compiled Simple language. 25.1. Dependencies When using csimple with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 25.2. Different between CSimple and Simple The simple language is a dynamic expression language which is runtime parsed into a set of Camel Expressions or Predicates. The csimple language is parsed into regular Java source code and compiled together with all the other source code, or compiled once during bootstrap via the camel-csimple-joor module. The simple language is generally very lightweight and fast, however for some use-cases with dynamic method calls via OGNL paths, then the simple language does runtime introspection and reflection calls. This has an overhead on performance, and was one of the reasons why csimple was created. The csimple language requires to be typesafe and method calls via OGNL paths requires to know the type during parsing. This means for csimple languages expressions you would need to provide the class type in the script, whereas simple introspects this at runtime. In other words the simple language is using duck typing (if it looks like a duck, and quacks like a duck, then it is a duck) and csimple is using Java type (typesafety). If there is a type error then simple will report this at runtime, and with csimple there will be a Java compilation error. 25.2.1. Additional CSimple functions The csimple language includes some additional functions to support common use-cases working with Collection , Map or array types. The following functions bodyAsIndex , headerAsIndex , and exchangePropertyAsIndex is used for these use-cases as they are typed. Function Type Description bodyAsIndex( type , index ) Type To be used for collecting the body from an existing Collection , Map or array (lookup by the index) and then converting the body to the given type determined by its classname. The converted body can be null. mandatoryBodyAsIndex( type , index ) Type To be used for collecting the body from an existing Collection , Map or array (lookup by the index) and then converting the body to the given type determined by its classname. Expects the body to be not null. headerAsIndex( key , type , index ) Type To be used for collecting a header from an existing Collection , Map or array (lookup by the index) and then converting the header value to the given type determined by its classname. The converted header can be null. mandatoryHeaderAsIndex( key , type , index ) Type To be used for collecting a header from an existing Collection , Map or array (lookup by the index) and then converting the header value to the given type determined by its classname. Expects the header to be not null. exchangePropertyAsIndex( key , type , index ) Type To be used for collecting an exchange property from an existing Collection , Map or array (lookup by the index) and then converting the exchange property to the given type determined by its classname. The converted exchange property can be null. mandatoryExchangePropertyAsIndex( key , type , index ) Type To be used for collecting an exchange property from an existing Collection , Map or array (lookup by the index) and then converting the exchange property to the given type determined by its classname. Expects the exchange property to be not null. 
For example given the following simple expression: This script has no type information, and the simple language will resolve this at runtime, by introspecting the message body and if it's a collection based then lookup the first element, and then invoke a method named getName via reflection. In csimple (compiled) we want to pre compile this and therefore the end user must provide type information with the bodyAsIndex function: 25.3. Compilation The csimple language is parsed into regular Java source code and compiled together with all the other source code, or it can be compiled once during bootstrap via the camel-csimple-joor module. There are two ways to compile csimple using the camel-csimple-maven-plugin generating source code at built time. using camel-csimple-joor which does runtime in-memory compilation during bootstrap of Camel. 25.3.1. Using camel-csimple-maven-plugin The camel-csimple-maven-plugin Maven plugin is used for discovering all the csimple scripts from the source code, and then automatic generate source code in the src/generated/java folder, which then gets compiled together with all the other sources. The maven plugin will do source code scanning of .java and .xml files (Java and XML DSL). The scanner limits to detect certain code patterns, and it may miss discovering some csimple scripts if they are being used in unusual/rare ways. The runtime compilation using camel-csimple-joor does not have this limitation. The benefit is all the csimple scripts will be compiled using the regular Java compiler and therefore everything is included out of the box as .class files in the application JAR file, and no additional dependencies is required at runtime. To use camel-csimple-maven-plugin you need to add it to your pom.xml file as shown: <plugins> <!-- generate source code for csimple languages --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-maven-plugin</artifactId> <version>USD{camel.version}</version> <executions> <execution> <id>generate</id> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> <!-- include source code generated to maven sources paths --> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>build-helper-maven-plugin</artifactId> <version>3.1.0</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>add-source</goal> <goal>add-resource</goal> </goals> <configuration> <sources> <source>src/generated/java</source> </sources> <resources> <resource> <directory>src/generated/resources</directory> </resource> </resources> </configuration> </execution> </executions> </plugin> </plugins> And then you must also add the build-helper-maven-plugin Maven plugin to include src/generated to the list of source folders for the Java compiler, to ensure the generated source code is compiled and included in the application JAR file. See the camel-example-csimple example at Camel Examples which uses the maven plugin. 25.3.2. Using camel-csimple-joor The jOOR library integrates with the Java compiler and performs runtime compilation of Java code. The supported runtime when using camel-simple-joor is intended for Java standalone, Spring Boot, Camel Quarkus and other microservices runtimes. It is not supported in OSGi, Camel Karaf or any kind of Java Application Server runtime. jOOR does not support runtime compilation with Spring Boot using fat jar packaging ( https://github.com/jOOQ/jOOR/issues/69 ), it works with exploded classpath. 
To use camel-simple-joor you simply just add it as dependency to the classpath: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-joor</artifactId> <version>{CamelSBProjectVersion}</version> </dependency> There is no need for adding Maven plugins to the pom.xml file. See the camel-example-csimple-joor example at Camel Examples which uses the jOOR compiler. 25.4. CSimple Language options The CSimple language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 25.5. Limitations Currently, the csimple language does not support: nested functions (aka functions inside functions) the null safe operator ( ? ). For example the following scripts cannot compile: Hello USD{bean:greeter(USD{body}, USD{header.counter})} USD{bodyAs(MyUser)?.address?.zip} > 10000 25.6. Auto imports The csimple language will automatically import from: 25.7. Configuration file You can configure the csimple language in the camel-csimple.properties file which is loaded from the root classpath. For example you can add additional imports in the camel-csimple.properties file by adding: You can also add aliases (key=value) where an alias will be used as a shorthand replacement in the code. Which allows to use echo() in the csimple language script such as: from("direct:hello") .transform(csimple("Hello echo()")) .log("You said USD{body}"); The echo() alias will be replaced with its value resulting in a script as: .transform(csimple("Hello USD{bodyAs(String)} USD{bodyAs(String)}")) 25.8. See Also See the Simple language as csimple has the same set of functions as simple language. 25.9. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. 
true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. 
environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. 
This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 
10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. 
Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. 
Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. 
Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. 
Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>",
"Hello USD\\{body[0].name}",
"Hello USD\\{bodyAsIndex(com.foo.MyUser, 0).name}",
"<plugins> <!-- generate source code for csimple languages --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-maven-plugin</artifactId> <version>USD{camel.version}</version> <executions> <execution> <id>generate</id> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> <!-- include source code generated to maven sources paths --> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>build-helper-maven-plugin</artifactId> <version>3.1.0</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>add-source</goal> <goal>add-resource</goal> </goals> <configuration> <sources> <source>src/generated/java</source> </sources> <resources> <resource> <directory>src/generated/resources</directory> </resource> </resources> </configuration> </execution> </executions> </plugin> </plugins>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-joor</artifactId> <version>{CamelSBProjectVersion}</version> </dependency>",
"Hello USD{bean:greeter(USD{body}, USD{header.counter})}",
"USD{bodyAs(MyUser)?.address?.zip} > 10000",
"import java.util.*; import java.util.concurrent.*; import java.util.stream.*; import org.apache.camel.*; import org.apache.camel.util.*;",
"import com.foo.MyUser; import com.bar.*; import static com.foo.MyHelper.*;",
"echo()=USD{bodyAs(String)} USD{bodyAs(String)}",
"from(\"direct:hello\") .transform(csimple(\"Hello echo()\")) .log(\"You said USD{body}\");",
".transform(csimple(\"Hello USD{bodyAs(String)} USD{bodyAs(String)}\"))"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-csimple-language-starter |
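The reference tables above describe the camel.hystrix.*, camel.resilience4j.*, camel.language.* and camel.rest.* options one by one, so it can help to see a handful of them combined. The following is a minimal application.yaml sketch for a Camel Spring Boot service. The property names are taken from the tables above; the chosen values, the undertow component, and port 8080 are illustrative assumptions, not recommendations.

# Illustrative application.yaml fragment for a Camel Spring Boot service.
# Property names come from the reference tables above; all values are example assumptions.
camel:
  resilience4j:
    failure-rate-threshold: 30            # trip the breaker once 30% of recorded calls fail
    minimum-number-of-calls: 20           # record at least 20 calls before computing the failure rate
    sliding-window-type: COUNT_BASED
    sliding-window-size: 50
    wait-duration-in-open-state: 30       # stay open for 30 seconds before probing again
    automatic-transition-from-open-to-half-open-enabled: true
  rest:
    component: undertow                   # assumed HTTP consumer component
    port: 8080
    binding-mode: json
  language:
    csimple:
      trim: true

Whether a given service should tighten or relax these thresholds depends on its traffic profile; the defaults listed in the tables are a reasonable starting point.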
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal:
- Data Grid 8.5 Documentation
- Data Grid 8.5 Component Details
- Supported Configurations for Data Grid 8.5
- Data Grid 8 Feature Support
- Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_memcached_protocol_endpoint_with_data_grid/rhdg-docs_datagrid
Chapter 4. OpenShift Data Foundation installation overview | Chapter 4. OpenShift Data Foundation installation overview OpenShift Data Foundation consists of multiple components managed by multiple operators. 4.1. Installed Operators When you install OpenShift Data Foundation from the Operator Hub, the following four separate Deployments are created: odf-operator : Defines the odf-operator Pod ocs-operator : Defines the ocs-operator Pod which runs processes for ocs-operator and its metrics-exporter in the same container. rook-ceph-operator : Defines the rook-ceph-operator Pod. mcg-operator : Defines the mcg-operator Pod. These operators run independently and interact with each other by creating customer resources (CRs) watched by the other operators. The ocs-operator is primarily responsible for creating the CRs to configure Ceph storage and Multicloud Object Gateway. The mcg-operator sometimes creates Ceph volumes for use by its components. 4.2. OpenShift Container Storage initialization The OpenShift Data Foundation bundle also defines an external plugin to the OpenShift Container Platform Console, adding new screens and functionality not otherwise available in the Console. This plugin runs as a web server in the odf-console-plugin Pod, which is managed by a Deployment created by the OLM at the time of installation. The ocs-operator automatically creates an OCSInitialization CR after it gets created. Only one OCSInitialization CR exists at any point in time. It controls the ocs-operator behaviors that are not restricted to the scope of a single StorageCluster , but only performs them once. When you delete the OCSInitialization CR, the ocs-operator creates it again and this allows you to re-trigger its initialization operations. The OCSInitialization CR controls the following behaviors: SecurityContextConstraints (SCCs) After the OCSInitialization CR is created, the ocs-operator creates various SCCs for use by the component Pods. Ceph Toolbox Deployment You can use the OCSInitialization to deploy the Ceph Toolbox Pod for the advanced Ceph operations. Rook-Ceph Operator Configuration This configuration creates the rook-ceph-operator-config ConfigMap that governs the overall configuration for rook-ceph-operator behavior. 4.3. Storage cluster creation The OpenShift Data Foundation operators themselves provide no storage functionality, and the desired storage configuration must be defined. After you install the operators, create a new StorageCluster , using either the OpenShift Container Platform console wizard or the CLI and the ocs-operator reconciles this StorageCluster . OpenShift Data Foundation supports a single StorageCluster per installation. Any StorageCluster CRs created after the first one is ignored by ocs-operator reconciliation. OpenShift Data Foundation allows the following StorageCluster configurations: Internal In the Internal mode, all the components run containerized within the OpenShift Container Platform cluster and uses dynamically provisioned persistent volumes (PVs) created against the StorageClass specified by the administrator in the installation wizard. Internal-attached This mode is similar to the Internal mode but the administrator is required to define the local storage devices directly attached to the cluster nodes that the Ceph uses for its backing storage. Also, the administrator need to create the CRs that the local storage operator reconciles to provide the StorageClass . The ocs-operator uses this StorageClass as the backing storage for Ceph. 
External In this mode, Ceph components do not run inside the OpenShift Container Platform cluster instead connectivity is provided to an external OpenShift Container Storage installation for which the applications can create PVs. The other components run within the cluster as required. MCG Standalone This mode facilitates the installation of a Multicloud Object Gateway system without an accompanying CephCluster. After a StorageCluster CR is found, ocs-operator validates it and begins to create subsequent resources to define the storage components. 4.3.1. Internal mode storage cluster Both internal and internal-attached storage clusters have the same setup process as follows: StorageClasses Create the storage classes that cluster applications use to create Ceph volumes. SnapshotClasses Create the volume snapshot classes that the cluster applications use to create snapshots of Ceph volumes. Ceph RGW configuration Create various Ceph object CRs to enable and provide access to the Ceph RGW object storage endpoint. Ceph RBD Configuration Create the CephBlockPool CR to enable RBD storage. CephFS Configuration Create the CephFilesystem CR to enable CephFS storage. Rook-Ceph Configuration Create the rook-config-override ConfigMap that governs the overall behavior of the underlying Ceph cluster. CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator . For more information, see Rook-Ceph operator . NoobaaSystem Create the NooBaa CR to trigger reconciliation from mcg-operator . For more information, see MCG operator . Job templates Create OpenShift Template CRs that define Jobs to run administrative operations for OpenShift Container Storage. Quickstarts Create the QuickStart CRs that display the quickstart guides in the Web Console. 4.3.1.1. Cluster Creation After the ocs-operator creates the CephCluster CR, the rook-operator creates the Ceph cluster according to the desired configuration. The rook-operator configures the following components: Ceph mon daemons Three Ceph mon daemons are started on different nodes in the cluster. They manage the core metadata for the Ceph cluster and they must form a majority quorum. The metadata for each mon is backed either by a PV if it is in a cloud environment or a path on the local host if it is in a local storage device environment. Ceph mgr daemon This daemon is started and it gathers metrics for the cluster and report them to Prometheus. Ceph OSDs These OSDs are created according to the configuration of the storageClassDeviceSets . Each OSD consumes a PV that stores the user data. By default, Ceph maintains three replicas of the application data across different OSDs for high durability and availability using the CRUSH algorithm. CSI provisioners These provisioners are started for RBD and CephFS . When volumes are requested for the storage classes of OpenShift Container Storage, the requests are directed to the Ceph-CSI driver to provision the volumes in Ceph. CSI volume plugins and CephFS The CSI volume plugins for RBD and CephFS are started on each node in the cluster. The volume plugin needs to be running wherever the Ceph volumes are required to be mounted by the applications. After the CephCluster CR is configured, Rook reconciles the remaining Ceph CRs to complete the setup: CephBlockPool The CephBlockPool CR provides the configuration for Rook operator to create Ceph pools for RWO volumes. 
CephFilesystem The CephFilesystem CR instructs the Rook operator to configure a shared file system with CephFS, typically for RWX volumes. The CephFS metadata server (MDS) is started to manage the shared volumes. CephObjectStore The CephObjectStore CR instructs the Rook operator to configure an object store with the RGW service CephObjectStoreUser CR The CephObjectStoreUser CR instructs the Rook operator to configure an object store user for NooBaa to consume, publishing access/private key as well as the CephObjectStore endpoint. The operator monitors the Ceph health to ensure that storage platform remains healthy. If a mon daemon goes down for too long a period (10 minutes), Rook starts a new mon in its place so that the full quorum can be fully restored. When the ocs-operator updates the CephCluster CR, Rook immediately responds to the requested changes to update the cluster configuration. 4.3.1.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizonalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that uses S3. Service A Service for the NooBaa S3 interface is created for applications that uses S3. 4.3.2. External mode storage cluster For external storage clusters, ocs-operator follows a slightly different setup process. The ocs-operator looks for the existence of the rook-ceph-external-cluster-details ConfigMap , which must be created by someone else, either the administrator or the Console. For information about how to create the ConfigMap , see Creating an OpenShift Data Foundation Cluster for external mode . 
The ocs-operator then creates some or all of the following resources, as specified in the ConfigMap : External Ceph Configuration A ConfigMap that specifies the endpoints of the external mons . External Ceph Credentials Secret A Secret that contains the credentials to connect to the external Ceph instance. External Ceph StorageClasses One or more StorageClasses to enable the creation of volumes for RBD, CephFS, and/or RGW. Enable CephFS CSI Driver If a CephFS StorageClass is specified, configure rook-ceph-operator to deploy the CephFS CSI Pods. Ceph RGW Configuration If an RGW StorageClass is specified, create various Ceph Object CRs to enable and provide access to the Ceph RGW object storage endpoint. After creating the resources specified in the ConfigMap , the StorageCluster creation process proceeds as follows: CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator (see subsequent sections). SnapshotClasses Create the SnapshotClasses that applications use to create snapshots of Ceph volumes. NoobaaSystem Create the NooBaa CR to trigger reconciliation from noobaa-operator (see subsequent sections). QuickStarts Create the Quickstart CRs that display the quickstart guides in the Console. 4.3.2.1. Cluster Creation The Rook operator performs the following operations when the CephCluster CR is created in external mode: The operator validates that a connection is available to the remote Ceph cluster. The connection requires mon endpoints and secrets to be imported into the local cluster. The CSI driver is configured with the remote connection to Ceph. The RBD and CephFS provisioners and volume plugins are started similarly to the CSI driver when configured in internal mode, the connection to Ceph happens to be external to the OpenShift cluster. Periodically watch for monitor address changes and update the Ceph-CSI configuration accordingly. 4.3.2.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. 
Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizonalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that uses S3. Service A Service for the NooBaa S3 interface is created for applications that uses S3. 4.3.3. MCG Standalone StorageCluster In this mode, no CephCluster is created. Instead a NooBaa system CR is created using default values to take advantage of pre-existing StorageClasses in the OpenShift Container Platform. dashboards. 4.3.3.1. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizonalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that uses S3. Service A Service for the NooBaa S3 interface is created for applications that uses S3. 4.3.3.2. StorageSystem Creation As a part of the StorageCluster creation, odf-operator automatically creates a corresponding StorageSystem CR, which exposes the StorageCluster to the OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_installation_overview |
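The chapter above describes what ocs-operator does once a StorageCluster exists, so a rough picture of that resource is useful. The fragment below is a minimal sketch of an internal-mode StorageCluster; the kind and API group are standard for OpenShift Data Foundation, but the device-set layout, storage class name, and sizes shown here are assumptions for illustration and should be checked against the deployment guide for your release.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:                      # drives how many OSDs are created and from which PVs
  - name: ocs-deviceset
    count: 1
    replica: 3                            # three replicas across failure domains (assumed)
    dataPVCTemplate:
      spec:
        storageClassName: gp3-csi         # assumed dynamically provisioned StorageClass
        accessModes: ["ReadWriteOnce"]
        volumeMode: Block
        resources:
          requests:
            storage: 512Gi                # illustrative OSD size

Once a resource like this is created, ocs-operator reconciles it into the CephCluster, NooBaa, and related CRs described in the sections above.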
5.168. libxml2 | 5.168. libxml2 5.168.1. RHSA-2012:1512 - Important: libxml2 security update Updated libxml2 packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The libxml2 library is a development toolbox providing the implementation of various XML standards. Security Fix CVE-2012-5134 A heap-based buffer underflow flaw was found in the way libxml2 decoded certain entities. A remote attacker could provide a specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. All users of libxml2 are advised to upgrade to these updated packages, which contain a backported patch to correct this issue. The desktop must be restarted (log out, then log back in) for this update to take effect. 5.168.2. RHSA-2012:1288 - Moderate: libxml2 security update Updated libxml2 packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libxml2 library is a development toolbox providing the implementation of various XML standards. Security Fixes CVE-2012-2807 Multiple integer overflow flaws, leading to heap-based buffer overflows, were found in the way libxml2 handled documents that enable entity expansion. A remote attacker could provide a large, specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2011-3102 A one byte buffer overflow was found in the way libxml2 evaluated certain parts of XML Pointer Language (XPointer) expressions. A remote attacker could provide a specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. All users of libxml2 are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The desktop must be restarted (log out, then log back in) for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libxml2 |
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements A list of all major enhancements, and new features introduced in this release of Red Hat Trusted Artifact Signer (RHTAS). The features and enhancements added by this release are: Log sharding for Rekor If left alone, Rekor's log will grow indefinitely, which can impact overall performance. With this release, we added log sharding for Rekor to help manage scaling, and minimizing any potential performance degradation from having large logs. You can configure sharding by directly modifying the Rekor custom resource (CR). For more information about how to configure log sharding for Rekor's signer key rotation, see the RHTAS Administration Guide . Log sharding for CT log If left alone, Certificate Transparency (CT) log will grow indefinitely, which can impact overall performance. With this release, we added log sharding for CT log to help manage scaling, and minimizing any potential performance degradation from having large logs. You can configure sharding by directly modifying the CT log custom resource (CR). For more information about how to configure log sharding for CT log's signer key rotation, see the RHTAS Administration Guide . Deploy Trillian independently With this release, you can deploy the Trillian service independently of all other RHTAS components. You can now deploy an independent version of Trillian that uses the RHTAS operator. Deploy Rekor independently from Trillian In earlier releases of RHTAS, Rekor required the Trillian service, along with the Trillian database, to be running in the same namespace as Rekor. Because of this dependency, deploying Rekor in complex or segmented environments was more challenging. With this release, we made Rekor independent from Trillian, giving users the flexibility to implement Trillian in a way that is more adaptable to complex infrastructure configurations. Because of this new feature, we extended the API, which allows you to specify connection information for the Trillian service. You can specify the Trillian connection information by providing the appropriate values to the spec.trillian.host and spec.trillian.port options in the Securesign resource. Proxy support for the Trusted Artifact Signer operator Connections are often established by using proxies in OpenShift environments, and this might be a hard requirement for some organizations. With this release, we added support for configured proxies in OpenShift environments to the RHTAS operator and operands. Trusted Timestamp Authority support added By default, the timestamp comes from Rekor's own internal clock, which is not externally verifiable or immutable. By using signed timestamps from trusted Timestamp Authorities (TSAs) this mitigates the risk of Rekor's internal clock being tampered with. With this release, you can configure a trusted TSA instead of using Rekor's internal clock. Support for custom Rekor UI route for Ingress sharding With this release, you can set a custom route for the Rekor user interface (UI) to work with OpenShift's Ingress Controller sharding feature. You can configure this by modifying the externalAccess section of ingress and route resources, adding the type: dev label under the routeSelectorlabels section. For example: ... externalAccess: enabled: true routeSelectorLabels: type: dev ... This allows the Ingress Controller to identify these resources for specific preset routes, in this case the dev route. 
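To ground the Rekor and Trillian items above, here is a hedged sketch of the fragment of a Securesign resource that the independent-Trillian feature describes. The spec.trillian.host and spec.trillian.port field names are stated in the text; the API group and version, resource name, and endpoint values are assumptions for illustration and may not match the shipped schema exactly.

apiVersion: rhtas.redhat.com/v1alpha1                     # assumed API group and version
kind: Securesign
metadata:
  name: securesign-sample
spec:
  trillian:
    host: trillian-logserver.trillian.svc.cluster.local   # assumed address of an externally managed Trillian
    port: 8091                                            # assumed Trillian gRPC port

With a fragment like this, Rekor no longer needs Trillian and its database in the same namespace; it simply connects to whatever endpoint the spec points at.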
The operator supports custom CA bundles with certificate injection With this release, the RHTAS operator now supports custom Certificate Authority (CA) bundles by using certificate injection. To ensure secure communications with OpenShift Proxy or other services needing to trust a specific CA, the RHTAS operator automatically injects trusted CA bundles into its managed services. These managed services are: Trillian, Fulcio, Rekor, Certificate Transparency (CT) log, and Timestamp Authority (TSA). You can trust additional CA bundles by referencing the config map containing the custom CA bundle in one of two ways: In the relevant custom resource (CR) file, under the metadata.annotations section, add rhtas.redhat.com/trusted-ca . Configure a custom CA bundle directly in the CR file by adding the trustedCA field in the spec . Configure a CT log prefix for Fulcio With this release, we added the ability to configure a Certificate Transparency (CT) log prefix for Fulcio. In earlier releases, we hard-coded the prefix to trusted-artifact-signer . Making the prefix configurable, gives you more flexibility, and allows you to target specific CT logs within the CT service. The Fulcio custom resource definition (CRD) has a new spec.ctlog.prefix field, where you can set the prefix. Enterprise Contract can initialize the TUF root With this release, you can now use the ec sigstore initialize --root USD{TUF_URL} command to initialize Enterprise Contract with The Update Framework (TUF) root deployed by RHTAS. Doing this initialization stores the trusted metadata locally in USDHOME/.sigstore/root . Support for excluding rules for specific images in an Enterprise Contract policy With this release, you can add an exclude directive in the volatileConfig section of an Enterprise Contract (EC) policy for a specific image digest. You can specify an image digest by using the imageRef key, which limits the policy exception to one specific image. Support for organizational level OCI registry authentication With this release, Enterprise Contract (EC) supports Open Container Initiative (OCI) registry credentials specified by using a subpath of the full repository path. If many matching credentials are available, then it tries them in order of specificity. For more information, see the authentication against container image registries specification . Improved the auditing of Enterprise Contract policy sources With this release, we log an entry for a Git SHA, or a bundle image digest for each policy source. This allows for better auditing of Enterprise Contract (EC) results, showing you the exact version of the policies and policy data used by EC, allowing for reproducibility. Displaying plain text as the default for Enterprise Contract reports With this release, we changed the default output format for the Enterprise Contract (EC) report to plain text. The plain text format makes reading the EC results report much easier. | [
"externalAccess: enabled: true routeSelectorLabels: type: dev"
] | https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1.1/html/release_notes/enhancements |
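Two other items from these release notes — the configurable CT log prefix and custom CA bundle injection — can also be pictured as a resource fragment. In the sketch below, the spec.ctlog.prefix field and the rhtas.redhat.com/trusted-ca annotation come from the notes above, while the kind, API version, config map name, and prefix value are illustrative assumptions rather than the authoritative schema.

apiVersion: rhtas.redhat.com/v1alpha1               # assumed API group and version
kind: Fulcio
metadata:
  name: fulcio-sample
  annotations:
    rhtas.redhat.com/trusted-ca: custom-ca-bundle   # config map holding the extra CA bundle (name assumed)
spec:
  ctlog:
    prefix: my-org-ctlog                            # target a specific CT log instead of the old hard-coded prefix

The same trusted-ca annotation (or the trustedCA field in the spec) can be applied to the other managed services listed above when they need to trust a private certificate authority.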
2.2. Fencing Overview | 2.2. Fencing Overview In a cluster system, there can be many nodes working on several pieces of vital production data. Nodes in a busy, multi-node cluster could begin to act erratically or become unavailable, prompting action by administrators. The problems caused by errant cluster nodes can be mitigated by establishing a fencing policy. Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the STONITH facility. When Pacemaker determines that a node has failed, it communicates to other cluster-infrastructure components that the node has failed. STONITH fences the failed node when notified of the failure. Other cluster-infrastructure components determine what actions to take, which includes performing any recovery that needs to be done. For example, DLM and GFS2, when notified of a node failure, suspend activity until they detect that STONITH has completed fencing the failed node. Upon confirmation that the failed node is fenced, DLM and GFS2 perform recovery. DLM releases locks of the failed node; GFS2 recovers the journal of the failed node. Node-level fencing through STONITH can be configured with a variety of supported fence devices, including:
- Uninterruptible Power Supply (UPS) - a device containing a battery that can be used to fence devices in the event of a power failure
- Power Distribution Unit (PDU) - a device with multiple power outlets used in data centers for clean power distribution as well as fencing and power isolation services
- Blade power control devices - dedicated systems installed in a data center configured to fence cluster nodes in the event of failure
- Lights-out devices - network-connected devices that manage cluster node availability and can perform fencing, power on/off, and other services by administrators locally or remotely | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-fencing-haao
Chapter 5. Networking Operators overview | Chapter 5. Networking Operators overview OpenShift Container Platform supports multiple types of networking Operators. You can manage the cluster networking using these networking Operators. 5.1. Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. For more information, see Cluster Network Operator in OpenShift Container Platform . 5.2. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. For more information, see DNS Operator in OpenShift Container Platform . 5.3. Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to external clients. The Ingress Operator implements the Ingress Controller API and is responsible for enabling external access to OpenShift Container Platform cluster services. For more information, see Ingress Operator in OpenShift Container Platform . 5.4. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. For more information, see Understanding the External DNS Operator . 5.5. Ingress Node Firewall Operator The Ingress Node Firewall Operator uses an extended Berkley Packet Filter (eBPF) and eXpress Data Path (XDP) plugin to process node firewall rules, update statistics and generate events for dropped traffic. The operator manages ingress node firewall resources, verifies firewall configuration, does not allow incorrectly configured rules that can prevent cluster access, and loads ingress node firewall XDP programs to the selected interfaces in the rule's object(s). For more information, see Understanding the Ingress Node Firewall Operator 5.6. Network Observability Operator The Network Observability Operator is an optional Operator that allows cluster administrators to observe the network traffic for OpenShift Container Platform clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with OpenShift Container Platform information and stored in Loki. You can view and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. For more information, see About Network Observability Operator . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/networking-operators-overview |
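Since this chapter is only an overview, one small example may help anchor it. The sketch below shows roughly what an additional IngressController resource managed by the Ingress Operator looks like; the resource kind, API group, and namespace are standard, but the name, domain, replica count, and route selector are placeholders, and the linked Ingress Operator documentation remains the authoritative reference.

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example-shard                      # a second controller alongside the default one; name is illustrative
  namespace: openshift-ingress-operator
spec:
  domain: apps-shard.example.com           # placeholder wildcard domain served by this controller
  replicas: 2
  routeSelector:                           # optional: only admit routes carrying this label
    matchLabels:
      type: sharded

Resources of this kind are what the Ingress Operator reconciles into running router deployments that expose cluster services externally.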
10.9. Change the Default JDBC Port Using Management Console | 10.9. Change the Default JDBC Port Using Management Console
1. Log in to the Management Console.
2. Navigate to the Socket Binding panel in the Management Console.
Standalone Mode: Select the Profile tab from the top-right of the console.
Domain Mode: Select the Profiles tab from the top-right of the console, then select the appropriate profile from the drop-down box in the top left.
Expand the Subsystems menu on the left of the console and select General Configuration → Socket Binding from the menu on the left of the console.
3. Modify the port number: Select the teiid-jdbc configuration. Select the Edit button. Set the Port to the new port number. Select Save. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/change_the_default_jdbc_port_using_management_console
Chapter 16. Configuring Routes | Chapter 16. Configuring Routes 16.1. Route configuration 16.1.1. Creating an HTTP-based route A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: USD oc expose svc hello-openshift If you examine the resulting Route resource, it should look similar to the following: YAML definition of the created unsecured route: apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 to: kind: Service name: hello-openshift 1 <Ingress_Domain> is the default ingress domain name. Note To display your default ingress domain, run the following command: USD oc get ingresses.config/cluster -o jsonpath={.spec.domain} 16.1.2. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 16.1.3. Enabling HTTP strict transport security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which ensures that only HTTPS traffic is allowed on the host. Any HTTP requests are dropped by default. This is useful for ensuring secure interactions with websites, or to offer a secure application for the user's benefit. When HSTS is enabled, HSTS adds a Strict Transport Security header to HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect to send HTTP to HTTPS. However, when HSTS is enabled, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. This is not required to be supported by the client, and can be disabled by setting max-age=0 . Important HSTS works only with secure routes (either edge terminated or re-encrypt). 
The configuration is ineffective on HTTP or passthrough routes. Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge terminated or re-encrypt route: apiVersion: v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 1 max-age is the only required parameter. It measures the length of time, in seconds, that the HSTS policy is in effect. The client updates max-age whenever a response with a HSTS header is received from the host. When max-age times out, the client discards the policy. 2 includeSubDomains is optional. When included, it tells the client that all subdomains of the host are to be treated the same as the host. 3 preload is optional. When max-age is greater than 0, then including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS to get the header. 16.1.4. Troubleshooting throughput issues Sometimes applications deployed through OpenShift Container Platform can cause network throughput issues such as unusually high latency between specific services. Use the following methods to analyze performance issues if pod logs do not reveal any cause of the problem: Use a packet analyzer, such as ping or tcpdump to analyze traffic between a pod and its node. For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane. USD tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1 1 podip is the IP address for the pod. Run the oc get pod <pod_name> -o wide command to get the IP address of a pod. tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with: USD tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the pods first, then from the nodes, to locate any bottlenecks. For information on installing and using iperf, see this Red Hat Solution . 16.1.5. Using cookies to keep route statefulness OpenShift Container Platform provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. OpenShift Container Platform can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. 
The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod. 16.1.5.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. By deleting the cookie it can force the request to re-choose an endpoint. So, if a server was overloaded it tries to remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: USD oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie : USD oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: USD ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 16.1.6. Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. However, this depends on the router implementation. The following table shows example routes and their accessibility: Table 16.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 16.1.7. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 16.2. 
Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are source , roundrobin , and leastconn . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_whitelist Sets a whitelist for the route. The whitelist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. 
Requests from IP addresses that are not in the whitelist are dropped. The maximum number of IP addresses and CIDR ranges allowed in a whitelist is 61. haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/log-send-hostname Sets the hostname field in the Syslog header. Uses the hostname of the system. log-send-hostname is enabled by default if any Ingress API logging method, such as sidecar or Syslog facility, is enabled for the router. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : cookies are transferred between the visited site and third-party sites. Strict : cookies are restricted to the visited site. None : cookies are not restricted to the visited site; they are sent with both same-site and cross-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. ROUTER_SET_FORWARDED_HEADERS Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days). The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router. ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Sets the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s The minimum interval at which the router is allowed to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics.
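The annotations in the preceding table can also be applied to an existing route with the oc annotate command. The following is a minimal sketch rather than part of the original procedure: the route name myroute and the numeric connection limit are illustrative, and only the annotation keys and value formats described above are assumed.
# Switch the route to the roundrobin load-balancing algorithm (available options: source, roundrobin, leastconn)
USD oc annotate route myroute --overwrite haproxy.router.openshift.io/balance=roundrobin
# Enable basic rate limiting and cap concurrent TCP connections per source IP address
USD oc annotate route myroute --overwrite haproxy.router.openshift.io/rate-limit-connections=true haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp=10
The examples that follow show annotations set directly in the route metadata instead.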
A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to time out frequently on that route. A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 A route that allows both an IP address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as the rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 16.3. rewrite-target examples: Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar 16.1.8. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces; otherwise, a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource by using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... 16.1.9. Creating a route through an Ingress object Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted.
Procedure Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command: YAML Definition of an Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" 1 spec: rules: - host: www.example.com http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate 1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge , passthrough and reencrypt . All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route. If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec: spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443 USD oc apply -f ingress.yaml List your routes: USD oc get routes The result includes an autogenerated route whose name starts with frontend- : NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None If you inspect this route, it looks this: YAML Definition of an autogenerated route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend 16.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. Important If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 16.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. 
You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 16.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route edge --help for more options. 16.2.3. 
Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication. | [
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"apiVersion: v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 spec: rules: - host: www.example.com http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/configuring-routes |
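For trying out the edge and re-encrypt procedures above, a self-signed certificate/key pair can be generated with openssl. This is a hedged sketch rather than part of the documented procedure; the subject name matches the www.example.com hostname used in the examples, and the output file names match the tls.crt and tls.key files that the procedures assume are in the current working directory.
# Generate an unencrypted private key and a self-signed certificate for the route host (password protected key files are not supported)
USD openssl req -x509 -newkey rsa:4096 -nodes -days 365 -keyout tls.key -out tls.crt -subj "/CN=www.example.com"
Self-signed certificates are suitable only for testing; production routes should use certificates issued by a trusted certificate authority.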
Chapter 1. About OpenShift Container Platform monitoring | Chapter 1. About OpenShift Container Platform monitoring 1.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. You also have the option to enable monitoring for user-defined projects . A cluster administrator can configure the monitoring stack with the supported configurations. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. With the OpenShift Container Platform web console, you can access metrics and manage alerts . After installing OpenShift Container Platform, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. As a cluster administrator, you can find answers to common problems such as user metrics unavailability and high consumption of disk space by Prometheus in Troubleshooting monitoring issues . 1.2. Monitoring stack architecture The OpenShift Container Platform monitoring stack is based on the Prometheus open source project and its wider ecosystem. The monitoring stack includes default monitoring components and components for monitoring user-defined projects. 1.2.1. Understanding the monitoring stack The monitoring stack includes the following components: Default platform monitoring components . A set of platform monitoring components are installed in the openshift-monitoring project by default during an OpenShift Container Platform installation. This provides monitoring for core cluster components including Kubernetes services. The default monitoring stack also enables remote health monitoring for clusters. These components are illustrated in the Installed by default section in the following diagram. Components for monitoring user-defined projects . After optionally enabling monitoring for user-defined projects, additional monitoring components are installed in the openshift-user-workload-monitoring project. This provides monitoring for user-defined projects. These components are illustrated in the User section in the following diagram. 1.2.2. Default monitoring components By default, the OpenShift Container Platform 4.16 monitoring stack includes these components: Table 1.1. Default monitoring stack components Component Description Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys, manages, and automatically updates Prometheus and Alertmanager instances, Thanos Querier, Telemeter Client, and metrics targets. The CMO is deployed by the Cluster Version Operator (CVO). Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus instances and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. 
Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Metrics Server The Metrics Server component (MS in the preceding diagram) collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs, which frees the core platform Prometheus stack from handling this functionality. Note that with the OpenShift Container Platform 4.16 release, Metrics Server replaces Prometheus Adapter. Alertmanager The Alertmanager service handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. kube-state-metrics agent The kube-state-metrics exporter agent (KSM in the preceding diagram) converts Kubernetes objects to metrics that Prometheus can use. monitoring-plugin The monitoring-plugin dynamic plugin component deploys the monitoring pages in the Observe section of the OpenShift Container Platform web console. You can use Cluster Monitoring Operator config map settings to manage monitoring-plugin resources for the web console pages. openshift-state-metrics agent The openshift-state-metrics exporter (OSM in the preceding diagram) expands upon kube-state-metrics by adding metrics for OpenShift Container Platform-specific resources. node-exporter agent The node-exporter agent (NE in the preceding diagram) collects metrics about every node in a cluster. The node-exporter agent is deployed on every node. Thanos Querier Thanos Querier aggregates and optionally deduplicates core OpenShift Container Platform metrics and metrics for user-defined projects under a single, multi-tenant interface. Telemeter Client Telemeter Client sends a subsection of the data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. Note All components of the monitoring stack use the TLS security profile settings that are centrally configured by a cluster administrator. If you configure a monitoring stack component that uses TLS security settings, the component uses the TLS security profile settings that already exist in the tlsSecurityProfile field in the global OpenShift Container Platform apiservers.config.openshift.io/cluster resource. 1.2.2.1. Default monitoring targets In addition to the components of the stack itself, the default monitoring stack monitors additional platform components. The following are examples of monitoring targets: CoreDNS etcd HAProxy Image registry Kubelets Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift Controller Manager Operator Lifecycle Manager (OLM) Note The exact list of targets can vary depending on your cluster capabilities and installed components. Each OpenShift Container Platform component is responsible for its monitoring configuration. For problems with the monitoring of an OpenShift Container Platform component, open a Jira issue against that component, not against the general monitoring component. Other OpenShift Container Platform framework components might be exposing metrics as well. For details, see their respective documentation. Additional resources Getting detailed information about a metrics target 1.2.3. 
Components for monitoring user-defined projects OpenShift Container Platform includes an optional enhancement to the monitoring stack that enables you to monitor services and pods in user-defined projects. This feature includes the following components: Table 1.2. Components for monitoring user-defined projects Component Description Prometheus Operator The Prometheus Operator (PO) in the openshift-user-workload-monitoring project creates, configures, and manages Prometheus and Thanos Ruler instances in the same project. Prometheus Prometheus is the monitoring system through which monitoring is provided for user-defined projects. Prometheus sends alerts to Alertmanager for processing. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform , Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Alertmanager The Alertmanager service handles alerts received from Prometheus and Thanos Ruler. Alertmanager is also responsible for sending user-defined alerts to external notification systems. Deploying this service is optional. Note The components in the preceding table are deployed after monitoring is enabled for user-defined projects. All of these components are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. 1.2.3.1. Monitoring targets for user-defined projects When monitoring is enabled for user-defined projects, you can monitor: Metrics provided through service endpoints in user-defined projects. Pods running in user-defined projects. 1.2.4. The monitoring stack in high-availability clusters By default, in multi-node clusters, the following components run in high-availability (HA) mode to prevent data loss and service interruption: Prometheus Alertmanager Thanos Ruler Thanos Querier Metrics Server Monitoring plugin The component is replicated across two pods, each running on a separate node. This means that the monitoring stack can tolerate the loss of one pod. Prometheus in HA mode Both replicas independently scrape the same targets and evaluate the same rules. The replicas do not communicate with each other. Therefore, data might differ between the pods. Alertmanager in HA mode The two replicas synchronize notification and silence states with each other. This ensures that each notification is sent at least once. If the replicas fail to communicate or if there is an issue on the receiving side, notifications are still sent, but they might be duplicated. Important Prometheus, Alertmanager, and Thanos Ruler are stateful components. To ensure high availability, you must configure them with persistent storage. Additional resources High-availability or single-node cluster detection and support Configuring persistent storage Configuring performance and scalability 1.2.5. Glossary of common terms for OpenShift Container Platform monitoring This glossary defines common terms that are used in OpenShift Container Platform architecture. Alertmanager Alertmanager handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. Alerting rules Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. 
Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys and manages Prometheus instances such as, the Thanos Querier, the Telemeter Client, and metrics targets to ensure that they are up to date. The CMO is deployed by the Cluster Version Operator (CVO). Cluster Version Operator The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container A container is a lightweight and executable image that includes software and all its dependencies. Containers virtualize the operating system. As a result, you can run containers anywhere from a data center to a public or private cloud as well as a developer's laptop. custom resource (CR) A CR is an extension of the Kubernetes API. You can create custom resources. etcd etcd is the key-value store for OpenShift Container Platform, which stores the state of all resource objects. Fluentd Fluentd is a log collector that resides on each OpenShift Container Platform node. It gathers application, infrastructure, and audit logs and forwards them to different outputs. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Kubelets Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Kubernetes controller manager Kubernetes controller manager governs the state of the cluster. Kubernetes scheduler Kubernetes scheduler allocates pods to nodes. labels Labels are key-value pairs that you can use to organize and select subsets of objects such as a pod. Metrics Server The Metrics Server monitoring component collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs, which frees the core platform Prometheus stack from handling this functionality. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Operator Lifecycle Manager (OLM) OLM helps you install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. Persistent volume claim (PVC) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. 
Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Silences A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Vector Vector is a log collector that deploys to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. web console A user interface (UI) to manage OpenShift Container Platform. 1.2.6. Additional resources About remote health monitoring Granting users permissions for monitoring for user-defined projects Configuring TLS security profiles 1.3. Understanding the monitoring stack - key concepts Get familiar with the OpenShift Container Platform monitoring concepts and terms. Learn about how you can improve performance and scale of your cluster, store and record data, manage metrics and alerts, and more. 1.3.1. About performance and scalability You can optimize the performance and scale of your clusters. You can configure the default monitoring stack by performing any of the following actions: Control the placement and distribution of monitoring components: Use node selectors to move components to specific nodes. Assign tolerations to enable moving components to tainted nodes. Use pod topology spread constraints. Set the body size limit for metrics scraping. Manage CPU and memory resources. Use metrics collection profiles. Additional resources Configuring performance and scalability for core platform monitoring Configuring performance and scalability for user workload monitoring 1.3.1.1. Using node selectors to move monitoring components By using the nodeSelector constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. By doing so, you can control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. How node selectors work with other constraints If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster: Topology spread constraints might be in place to control pod placement. 
Hard anti-affinity rules are in place for Prometheus, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available. When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes. Therefore, if you configure a node selector constraint but existing constraints cannot all be satisfied, the pod scheduler cannot match all constraints and will not schedule a pod for placement onto a node. To maintain resilience and high availability for monitoring components, ensure that enough nodes are available and match all constraints when you configure a node selector constraint to move a component. 1.3.1.2. About pod topology spread constraints for monitoring You can use pod topology spread constraints to control how the monitoring pods are spread across a network topology when OpenShift Container Platform pods are deployed in multiple availability zones. Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios. You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. 1.3.1.3. About specifying limits and requests for monitoring components You can configure resource limits and requests for the following core platform monitoring components: Alertmanager kube-state-metrics monitoring-plugin node-exporter openshift-state-metrics Prometheus Metrics Server Prometheus Operator and its admission webhook service Telemeter Client Thanos Querier You can configure resource limits and requests for the following components that monitor user-defined projects: Alertmanager Prometheus Thanos Ruler By defining the resource limits, you limit a container's resource usage, which prevents the container from exceeding the specified maximum values for CPU and memory resources. By defining the resource requests, you specify that a container can be scheduled only on a node that has enough CPU and memory resources available to match the requested resources. 1.3.1.4. About metrics collection profiles Important Metrics collection profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, Prometheus collects metrics exposed by all default metrics targets in OpenShift Container Platform components. 
However, you might want Prometheus to collect fewer metrics from a cluster in certain scenarios: If cluster administrators require only alert, telemetry, and console metrics and do not require other metrics to be available. If a cluster increases in size, and the increased size of the default metrics data collected now requires a significant increase in CPU and memory resources. You can use a metrics collection profile to collect either the default amount of metrics data or a minimal amount of metrics data. When you collect minimal metrics data, basic monitoring features such as alerting continue to work. At the same time, the CPU and memory resources required by Prometheus decrease. You can enable one of two metrics collection profiles: full : Prometheus collects metrics data exposed by all platform components. This setting is the default. minimal : Prometheus collects only the metrics data required for platform alerts, recording rules, telemetry, and console dashboards. 1.3.2. About storing and recording data You can store and record data to help you protect the data and use them for troubleshooting. You can configure the default monitoring stack by performing any of the following actions: Configure persistent storage: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. Modify the retention time and size for Prometheus and Thanos Ruler metrics data. Configure logging to help you troubleshoot issues with your cluster: Configure audit logs for Metrics Server. Set log levels for monitoring. Enable the query logging for Prometheus and Thanos Querier. Additional resources Storing and recording data for core platform monitoring Storing and recording data for user workload monitoring 1.3.2.1. Retention time and size for Prometheus metrics By default, Prometheus retains metrics data for the following durations: Core platform monitoring : 15 days Monitoring for user-defined projects : 24 hours You can modify the retention time for the Prometheus instance to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit. Note the following behaviors of these data retention settings: The size-based retention policy applies to all data block directories in the /prometheus directory, including persistent blocks, write-ahead log (WAL) data, and m-mapped chunks. Data in the /wal and /head_chunks directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. Thus, if you set a retention size limit lower than the maximum size set for the /wal and /head_chunks directories, you have configured the system not to retain any data blocks in the /prometheus data directories. The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data. If you do not explicitly define values for either retention or retentionSize , retention time defaults to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set. 
If you define values for both retention and retentionSize , both values apply. If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks. If you define a value for retentionSize and do not define retention , only the retentionSize value applies. If you do not define a value for retentionSize and only define a value for retention , only the retention value applies. If you set the retentionSize or retention value to 0 , the default settings apply. The default settings set retention time to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. 1.3.3. Understanding metrics In OpenShift Container Platform 4.16, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. Metrics enable you to monitor how cluster components and your own workloads are performing. You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level. In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics . For instance, you can expose a route to the prometheus-example-app example application and then run the following to view all of its available metrics: USD curl http://<example_app_endpoint>/metrics Example output # HELP http_requests_total Count of all HTTP requests # TYPE http_requests_total counter http_requests_total{code="200",method="get"} 4 http_requests_total{code="404",method="get"} 2 # HELP version Version information about this binary # TYPE version gauge version{version="v0.1.0"} 1 Additional resources Configuring metrics for core platform monitoring Configuring metrics for user workload monitoring Accessing metrics as an administrator Accessing metrics as a developer 1.3.3.1. Controlling the impact of unbound metrics attributes in user-defined projects Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. 
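As an illustration only (not output from a real service), the following sketch contrasts a metric that carries an unbound customer_id label with one whose label is bound to a small, fixed set of values; the plan label and the sample values are hypothetical.
# Unbound: every distinct customer_id value creates another time series
http_requests_total{code="200",method="get",customer_id="10243"} 1
http_requests_total{code="200",method="get",customer_id="10244"} 1
# Bounded: the label can take only a few possible values, so the number of time series stays small
http_requests_total{code="200",method="get",plan="free"} 7
http_requests_total{code="200",method="get",plan="paid"} 3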
Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. 1.3.3.2. Adding cluster ID labels to metrics If you manage multiple OpenShift Container Platform clusters and use the remote write feature to send metrics data from these clusters to an external storage location, you can add cluster ID labels to identify the metrics data coming from different clusters. You can then query these labels to identify the source cluster for a metric and distinguish that data from similar metrics data sent by other clusters. This way, if you manage many clusters for multiple customers and send metrics data to a single centralized storage system, you can use cluster ID labels to query metrics for a particular cluster or customer. Creating and using cluster ID labels involves three general steps: Configuring the write relabel settings for remote write storage. Adding cluster ID labels to the metrics. Querying these labels to identify the source cluster or customer for a metric. 1.3.4. About monitoring dashboards OpenShift Container Platform provides a set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. Additional resources Reviewing monitoring dashboards as a cluster administrator Reviewing monitoring dashboards as a developer 1.3.4.1. Monitoring dashboards in the Administrator perspective Use the Administrator perspective to access dashboards for the core OpenShift Container Platform components, including the following items: API performance etcd Kubernetes compute resources Kubernetes network resources Prometheus USE method dashboards relating to cluster and node performance Node performance metrics Figure 1.1. Example dashboard in the Administrator perspective 1.3.4.2. Monitoring dashboards in the Developer perspective Use the Developer perspective to access Kubernetes compute resources dashboards that provide the following application metrics for a selected project: CPU usage Memory usage Bandwidth information Packet rate information Figure 1.2. Example dashboard in the Developer perspective 1.3.5. Managing alerts In the OpenShift Container Platform, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an OpenShift Container Platform cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. 
You can mute an alert after the initial notification, while you work on resolving the issue. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. Additional resources Configuring alerts and notifications for core platform monitoring Configuring alerts and notifications for user workload monitoring Managing alerts as an Administrator Managing alerts as a Developer 1.3.5.1. Managing silences You can create a silence for an alert in the OpenShift Container Platform web console in both the Administrator and Developer perspectives. After you create a silence, you will not receive notifications about an alert when the alert fires. Creating silences is useful in scenarios where you have received an initial alert notification, and you do not want to receive further notifications during the time in which you resolve the underlying issue causing the alert to fire. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. After you create silences, you can view, edit, and expire them. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. 1.3.5.2. Managing alerting rules for core platform monitoring The OpenShift Container Platform monitoring includes a large set of default alerting rules for platform metrics. As a cluster administrator, you can customize this set of rules in two ways: Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. For example, you can change the severity label for an alert from warning to critical to help you route and triage issues flagged by an alert. Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the openshift-monitoring namespace. Core platform alerting rule considerations New alerting rules must be based on the default OpenShift Container Platform monitoring metrics. You must create the AlertingRule and AlertRelabelConfig objects in the openshift-monitoring namespace. You can only add and modify alerting rules. You cannot create new recording rules or modify existing recording rules. If you modify existing platform alerting rules by using an AlertRelabelConfig object, your modifications are not reflected in the Prometheus alerts API. Therefore, any dropped alerts still appear in the OpenShift Container Platform web console even though they are no longer forwarded to Alertmanager. Additionally, any modifications to alerts, such as a changed severity label, do not appear in the web console. 1.3.5.3. Tips for optimizing alerting rules for core platform monitoring If you customize core platform alerting rules to meet your organization's specific needs, follow these guidelines to help ensure that the customized rules are efficient and effective. Minimize the number of new rules . Create only rules that are essential to your specific requirements. By minimizing the number of rules, you create a more manageable and focused alerting system in your monitoring environment. Focus on symptoms rather than causes . 
Create rules that notify users of symptoms instead of underlying causes. This approach ensures that users are promptly notified of a relevant symptom so that they can investigate the root cause after an alert has triggered. This tactic also significantly reduces the overall number of rules you need to create. Plan and assess your needs before implementing changes . First, decide what symptoms are important and what actions you want users to take if these symptoms occur. Then, assess existing rules and decide if you can modify any of them to meet your needs instead of creating entirely new rules for each symptom. By modifying existing rules and creating new ones judiciously, you help to streamline your alerting system. Provide clear alert messaging . When you create alert messages, describe the symptom, possible causes, and recommended actions. Include unambiguous, concise explanations along with troubleshooting steps or links to more information. Doing so helps users quickly assess the situation and respond appropriately. Include severity levels . Assign severity levels to your rules to indicate how a user needs to react when a symptom occurs and triggers an alert. For example, classifying an alert as Critical signals that an individual or a critical response team needs to respond immediately. By defining severity levels, you help users know how to respond to an alert and help ensure that the most urgent issues receive prompt attention. 1.3.5.4. About creating alerting rules for user-defined projects If you create alerting rules for a user-defined project, consider the following key behaviors and important limitations when you define the new rules: A user-defined alerting rule can include metrics exposed by its own project in addition to the default metrics from core platform monitoring. You cannot include metrics from another user-defined project. For example, an alerting rule for the ns1 user-defined project can use metrics exposed by the ns1 project in addition to core platform metrics, such as CPU and memory metrics. However, the rule cannot include metrics from a different ns2 user-defined project. To reduce latency and to minimize the load on core platform monitoring components, you can add the openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus label to a rule. This label forces only the Prometheus instance deployed in the openshift-user-workload-monitoring project to evaluate the alerting rule and prevents the Thanos Ruler instance from doing so. Important If an alerting rule has this label, your alerting rule can use only those metrics exposed by your user-defined project. Alerting rules you create based on default platform metrics might not trigger alerts. 1.3.5.5. Managing alerting rules for user-defined projects In OpenShift Container Platform, you can view, edit, and remove alerting rules in user-defined projects. Alerting rule considerations The default alerting rules are used specifically for the OpenShift Container Platform cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 1.3.5.6. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . 
Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. 1.3.5.7. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. 1.3.5.7.1. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 1.3.5.7.2. 
Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OpenShift Container Platform and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. 1.3.5.7.3. Understanding alerting rule filters In the Administrator perspective, the Alerting rules page in the Alerting UI provides details about alerting rules relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert state filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . Platform-level alerting rules relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 1.3.5.7.4. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 1.3.6. 
Understanding alert routing for user-defined projects As a cluster administrator, you can enable alert routing for user-defined projects. With this feature, you can allow users with the alert-routing-edit cluster role to configure alert notification routing and receivers for user-defined projects. These notifications are routed by the default Alertmanager instance or, if enabled, an optional Alertmanager instance dedicated to user-defined monitoring. Users can then create and configure user-defined alert routing by creating or editing the AlertmanagerConfig objects for their user-defined projects without the help of an administrator. After a user has defined alert routing for a user-defined project, user-defined alert notifications are routed as follows: To the alertmanager-main pods in the openshift-monitoring namespace if using the default platform Alertmanager instance. To the alertmanager-user-workload pods in the openshift-user-workload-monitoring namespace if you have enabled a separate instance of Alertmanager for user-defined projects. Note Review the following limitations of alert routing for user-defined projects: For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace. When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration. Additional resources Enabling alert routing for user-defined projects 1.3.7. Sending notifications to external systems In OpenShift Container Platform 4.16, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. Additional resources Configuring alert notifications for core platform monitoring Configuring alert notifications for user workload monitoring | [
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring/about-openshift-container-platform-monitoring |
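The alerting rule considerations above can be made concrete with a short sketch. The following PrometheusRule object is only an illustration: the ns1 namespace, the example_http_errors_total metric, the threshold, and the rule names are assumptions rather than values taken from this documentation, and the optional scope label is the one described in the section on user-defined alerting rules.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts
  namespace: ns1
  labels:
    # Optional: restrict evaluation to the Prometheus instance in
    # openshift-user-workload-monitoring; the rule can then use only
    # metrics exposed by this project.
    openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus
spec:
  groups:
  - name: example
    rules:
    - alert: ExampleHighErrorRate
      expr: rate(example_http_errors_total[5m]) > 5
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: High error rate in ns1
        description: The example application in ns1 is returning errors at an elevated rate. Check the application logs and recent deployments.

The message follows the guidance above: it states the symptom, carries a severity label for routing, and points at a first troubleshooting step.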
Chapter 16. Impersonating the system:admin user | Chapter 16. Impersonating the system:admin user 16.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 16.2. Impersonating the system:admin user You can grant a user permission to impersonate system:admin , which grants them cluster administrator permissions. Procedure To grant a user permission to impersonate system:admin , run the following command: USD oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username> Tip You can alternatively apply the following YAML to grant permission to impersonate system:admin : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username> 16.3. Impersonating the system:admin group When a system:admin user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2> parameters in the command to impersonate the associated groups. Procedure To grant a user permission to impersonate a system:admin by impersonating the associated cluster administration groups, run the following command: USD oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \ --as-group=<group1> --as-group=<group2> | [
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>",
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/impersonating-system-admin |
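As a usage sketch, once the sudoer cluster role binding exists, the granted user passes the impersonation flags on any oc command; the user and group names below are placeholders:

oc get nodes --as=system:admin
oc get nodes --as=<user> --as-group=<group1> --as-group=<group2>

The first form impersonates the system:admin user directly; the second impersonates a user together with the cluster administration groups, matching the binding created above.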
9.2. Keepalived and HAProxy Replace Piranha as Load Balancer | 9.2. Keepalived and HAProxy Replace Piranha as Load Balancer Red Hat Enterprise Linux 7 replaces the Piranha Load Balancer technology with Keepalived and HAProxy . Keepalived provides simple and robust facilities for load balancing and high availability. The load-balancing framework relies on the well-known and widely used Linux Virtual Server kernel module providing Layer-4 (transport layer) load balancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage a load balanced server pool according to their health. Keepalived also implements the Virtual Router Redundancy Protocol (VRRPv2) to achieve high availability with director failover. HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments. HAProxy can: route HTTP requests depending on statically assigned cookies; spread the load among several servers while assuring server persistence through the use of HTTP cookies; switch to backup servers in the event a main server fails; accept connections to special ports dedicated to service monitoring; stop accepting connections without breaking existing ones; add, modify, and delete HTTP headers in both directions; block requests matching particular patterns; persist client connections to the correct application server depending on application cookies; report detailed status as HTML pages to authenticated users from a URI intercepted from the application. With Red Hat Enterprise Linux 7, the Load Balancer technology is now included in the base operating system and is no longer a Red Hat Enterprise Linux Add-On. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-clustering-keepalived_and_haproxy_replace_piranha_as_load_balancer |
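The following haproxy.cfg fragment is a minimal sketch of two of the capabilities listed above, cookie-based server persistence and switching to a backup server; the names, addresses, and ports are illustrative only and are not part of a shipped configuration:

frontend www
    bind *:80
    default_backend app

backend app
    balance roundrobin
    # insert a cookie so that a client keeps reaching the same server
    cookie SERVERID insert indirect nocache
    server app1 192.0.2.11:8080 check cookie app1
    server app2 192.0.2.12:8080 check cookie app2
    # used only when the main servers fail their health checks
    server spare 192.0.2.13:8080 check backup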
Chapter 6. Back-end compression | Chapter 6. Back-end compression Use the compression options to compress data on a smaller-capacity edge cluster. BlueStore allows two types of compression: BlueStore-level compression for general workloads. Ceph Object Gateway-level compression for S3 workloads. For more information on compression algorithms, see Pool values . You need to enable compression and ensure that no crashes occur on the cluster when compression is enabled on pools. You can enable compression on the pools of the edge cluster in the following ways: Enable supported compression algorithms such as snappy, zlib, and zstd and supported compression modes such as None , passive , aggressive , and force with the following commands: Syntax Enable various compression ratios with the following commands: Syntax Create three pools and enable a different compression configuration on each pool to ensure that no I/O stoppage occurs. Create a fourth pool without any compression. Write the same amount of data to the fourth pool as to the pools with compression. The pools with compression use less RAW space than the pool without compression. To verify that these options are set, use the ceph osd pool get POOL_NAME OPTION_NAME command. To unset these options, use the ceph osd pool unset POOL_NAME OPTION_NAME command with the appropriate options. | [
"ceph osd pool set POOL_NAME compression_algorithm ALGORITHM ceph osd pool set POOL_NAME compression_mode MODE",
"ceph osd pool set POOL_NAME compression_required_ratio RATIO ceph osd pool set POOL_NAME compression_min_blob_size SIZE ceph osd pool set POOL_NAME compression_max_blob_size SIZE"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/edge_guide/back-end-compression_edge |
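As a worked example of the syntax above, the following commands enable snappy compression in aggressive mode on a hypothetical pool named edge-pool, set an example required ratio, and then verify and unset the options; the pool name and values are placeholders:

ceph osd pool set edge-pool compression_algorithm snappy
ceph osd pool set edge-pool compression_mode aggressive
ceph osd pool set edge-pool compression_required_ratio 0.7
ceph osd pool get edge-pool compression_algorithm
ceph osd pool get edge-pool compression_mode
ceph osd pool unset edge-pool compression_mode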
2.2.4. Securing NFS | 2.2.4. Securing NFS Important The version of NFS included in Red Hat Enterprise Linux 6, NFSv4, no longer requires the portmap service as outlined in Section 2.2.2, "Securing Portmap" . NFS traffic now utilizes TCP in all versions, rather than UDP, and requires it when using NFSv4. NFSv4 now includes Kerberos user and group authentication, as part of the RPCSEC_GSS kernel module. Information on portmap is still included, since Red Hat Enterprise Linux 6 supports NFSv2 and NFSv3, both of which utilize portmap . 2.2.4.1. Carefully Plan the Network NFSv2 and NFSv3 traditionally passed data insecurely. All versions of NFS now have the ability to authenticate (and optionally encrypt) ordinary file system operations using Kerberos. Under NFSv4 all operations can use Kerberos; under v2 or v3, file locking and mounting still do not use it. When using NFSv4.0, delegations may be turned off if the clients are behind NAT or a firewall. Refer to the section on pNFS in the Storage Administration Guide for information on the use of NFSv4.1 to allow delegations to operate through NAT and firewalls. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security-securing_nfs |
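As an illustration of the Kerberos support described above, an NFS server can require Kerberos authentication and encryption for an export through the sec option in /etc/exports; the path and domain below are placeholders, and krb5 or krb5i can be used instead of krb5p when only authentication or integrity protection is required:

/export  *.example.com(sec=krb5p,rw,sync)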
Chapter 8. Removing Service Telemetry Framework from the Red Hat OpenShift Container Platform environment | Chapter 8. Removing Service Telemetry Framework from the Red Hat OpenShift Container Platform environment Remove Service Telemetry Framework (STF) from a Red Hat OpenShift Container Platform environment if you no longer require the STF functionality. To remove STF from the Red Hat OpenShift Container Platform environment, you must perform the following tasks: Delete the namespace. Remove the cert-manager Operator. Remove the Cluster Observability Operator. 8.1. Deleting the namespace To remove the operational resources for STF from Red Hat OpenShift Container Platform, delete the namespace. Procedure Run the oc delete command: USD oc delete project service-telemetry Verify that the resources have been deleted from the namespace: USD oc get all No resources found. 8.2. Removing the cert-manager Operator for Red Hat OpenShift If you are not using the cert-manager Operator for Red Hat OpenShift for any other applications, delete the Subscription, ClusterServiceVersion, and CustomResourceDefinitions. For more information about removing the cert-manager Operator for Red Hat OpenShift, see Removing cert-manager Operator for Red Hat OpenShift in the OpenShift Container Platform Documentation . Additional resources Deleting Operators from a cluster . 8.3. Removing the Cluster Observability Operator If you are not using the Cluster Observability Operator for any other applications, delete the Subscription, ClusterServiceVersion, and CustomResourceDefinitions. For more information about removing the Cluster Observability Operator, see Deleting Operators from a cluster using the web console in the OpenShift Container Platform Documentation . Additional resources Deleting Operators from a cluster . | [
"oc delete project service-telemetry",
"oc get all No resources found."
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-removing-stf-from-the-openshift-environment_assembly |
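Deleting an Operator's Subscription, ClusterServiceVersion, and CustomResourceDefinitions follows the general pattern sketched below; the namespace and resource names are placeholders, so check the linked procedures for the exact values used by the cert-manager Operator for Red Hat OpenShift and the Cluster Observability Operator:

oc get subscriptions -n <operator_namespace>
oc delete subscription <subscription_name> -n <operator_namespace>
oc get clusterserviceversions -n <operator_namespace>
oc delete clusterserviceversion <csv_name> -n <operator_namespace>
oc get crd | grep <operator_keyword>
oc delete crd <crd_name>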
Chapter 142. KafkaConnector schema reference | Chapter 142. KafkaConnector schema reference Property Property type Description spec KafkaConnectorSpec The specification of the Kafka Connector. status KafkaConnectorStatus The status of the Kafka Connector. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaconnector-reference |
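The schema reference lists only the top-level properties. The following sketch shows how a KafkaConnector custom resource is commonly laid out; the API version, cluster label, connector class, and configuration keys are illustrative assumptions and should be checked against the examples shipped with your Streams for Apache Kafka release:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    # ties the connector to the Kafka Connect cluster that runs it
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  config:
    file: /opt/kafka/LICENSE
    topic: my-topic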
Chapter 13. Nodes | Chapter 13. Nodes 13.1. Node maintenance Nodes can be placed into maintenance mode by using the oc adm utility or NodeMaintenance custom resources (CRs). Note The node-maintenance-operator (NMO) is no longer shipped with OpenShift Virtualization. It is deployed as a standalone Operator from the OperatorHub in the OpenShift Container Platform web console or by using the OpenShift CLI ( oc ). For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. Important Virtual machines (VMs) must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated. The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform custom resource processing. 13.1.1. Eviction strategies Placing a node into maintenance marks the node as unschedulable and drains all the VMs and pods from it. You can configure eviction strategies for virtual machines (VMs) or for the cluster. VM eviction strategy The VM LiveMigrate eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node. You can configure eviction strategies for virtual machines (VMs) by using the OpenShift Container Platform web console or the command line . Important The default eviction strategy is LiveMigrate . A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually. You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible , which does not block an upgrade, or to None , for VMs that should not be migrated. Cluster eviction strategy You can configure an eviction strategy for the cluster to prioritize workload continuity or infrastructure upgrade. Table 13.1. Cluster eviction strategies Eviction strategy Description Interrupts workflow Blocks upgrades LiveMigrate 1 Prioritizes workload continuity over upgrades. No Yes 2 LiveMigrateIfPossible Prioritizes upgrades over workload continuity to ensure that the environment is updated. Yes No None 3 Shuts down VMs with no eviction strategy. Yes No Default eviction strategy for multi-node clusters. If a VM blocks an upgrade, you must shut down the VM manually. Default eviction strategy for single-node OpenShift. 13.1.1.1. Configuring a VM eviction strategy using the command line You can configure an eviction strategy for a virtual machine (VM) by using the command line. Important The default eviction strategy is LiveMigrate . A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually. 
You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible , which does not block an upgrade, or to None , for VMs that should not be migrated. Procedure Edit the VirtualMachine resource by running the following command: USD oc edit vm <vm_name> -n <namespace> Example eviction strategy apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1 # ... 1 Specify the eviction strategy. The default value is LiveMigrate . Restart the VM to apply the changes: USD virtctl restart <vm_name> -n <namespace> 13.1.1.2. Configuring a cluster eviction strategy by using the command line You can configure an eviction strategy for a cluster by using the command line. Procedure Edit the hyperconverged resource by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the cluster eviction strategy as shown in the following example: Example cluster eviction strategy apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate # ... 13.1.2. Run strategies The spec.runStrategy key determines how a VM behaves under certain conditions. 13.1.2.1. Run strategies The spec.runStrategy key has four possible values: Always The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. RerunOnFailure The VMI is re-created on another node if the instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down. Manual You control the VMI state manually with the start , stop , and restart virtctl client commands. The VM is not automatically restarted. Halted No VMI is present when a VM is created. Different combinations of the virtctl start , stop and restart commands affect the run strategy. The following table describes a VM's transition between states. The first column shows the VM's initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run. Table 13.2. Run strategy before and after virtctl commands Initial run strategy Start Stop Restart Always - Halted Always RerunOnFailure - Halted RerunOnFailure Manual Manual Manual Manual Halted Always - - Note If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with runStrategy: Always or runStrategy: RerunOnFailure are rescheduled on a new node. 13.1.2.2. Configuring a VM run strategy by using the command line You can configure a run strategy for a virtual machine (VM) by using the command line. Procedure Edit the VirtualMachine resource by running the following command: USD oc edit vm <vm_name> -n <namespace> Example run strategy apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always # ... 13.1.3. Maintaining bare metal nodes When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. 
When a bare metal node fails, for example, if a fatal kernel error happens or a NIC card hardware failure occurs, workloads on the failed node need to be restarted elsewhere on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status information is provided during maintenance. 13.1.4. Additional resources About live migration 13.2. Managing node labeling for obsolete CPU models You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node. 13.2.1. About node labeling for obsolete CPU models The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs. By default, the following CPU models are eliminated from the list of labels generated for the node: Example 13.1. Obsolete CPU models This predefined list is not visible in the HyperConverged CR. You cannot remove CPU models from this list, but you can add to the list by editing the spec.obsoleteCPUs.cpuModels field of the HyperConverged CR. 13.2.2. About node labeling for CPU features Through the process of iteration, the base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node. For example: An environment might have two supported CPU models: Penryn and Haswell . If Penryn is specified as the CPU model for minCPU , each base CPU feature for Penryn is compared to the list of CPU features supported by Haswell . Example 13.2. CPU features supported by Penryn Example 13.3. CPU features supported by Haswell If both Penryn and Haswell support a specific CPU feature, a label is not created for that feature. Labels are generated for CPU features that are supported only by Haswell and not by Penryn . Example 13.4. Node labels created for CPU features after iteration 13.2.3. Configuring obsolete CPU models You can configure a list of obsolete CPU models by editing the HyperConverged custom resource (CR). Procedure Edit the HyperConverged custom resource, specifying the obsolete CPU models in the obsoleteCPUs array. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - "<obsolete_cpu_1>" - "<obsolete_cpu_2>" minCPUModel: "<minimum_cpu_model>" 2 1 Replace the example values in the cpuModels array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR. 2 Replace this value with the minimum CPU model that you want to use for basic CPU features. If you do not specify a value, Penryn is used by default. 13.3. Preventing node reconciliation Use skip-node annotation to prevent the node-labeller from reconciling a node. 13.3.1. Using skip-node annotation If you want the node-labeller to skip a node, annotate that node by using the oc CLI. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Annotate the node that you want to skip by running the following command: USD oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1 1 Replace <node_name> with the name of the relevant node to skip. Reconciliation resumes on the cycle after the node annotation is removed or set to false. 13.3.2.
Additional resources Managing node labeling for obsolete CPU models 13.4. Deleting a failed node to trigger virtual machine failover If a node fails and node health checks are not deployed on your cluster, virtual machines (VMs) with runStrategy: Always configured are not automatically relocated to healthy nodes. 13.4.1. Prerequisites A node where a virtual machine was running has the NotReady condition . The virtual machine that was running on the failed node has runStrategy set to Always . You have installed the OpenShift CLI ( oc ). 13.4.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 13.4.3. Verifying virtual machine failover After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI. 13.4.3.1. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A 13.5. Activating kernel samepage merging (KSM) OpenShift Virtualization can activate kernel samepage merging (KSM) when nodes are overloaded. KSM deduplicates identical data found in the memory pages of virtual machines (VMs). If you have very similar VMs, KSM can make it possible to schedule more VMs on a single node. Important You must only use KSM with trusted workloads. 13.5.1. Prerequisites Ensure that an administrator has configured KSM support on any nodes where you want OpenShift Virtualization to activate KSM. 13.5.2. About using OpenShift Virtualization to activate KSM You can configure OpenShift Virtualization to activate kernel samepage merging (KSM) when nodes experience memory overload. 13.5.2.1. Configuration methods You can enable or disable the KSM activation feature for all nodes by using the OpenShift Container Platform web console or by editing the HyperConverged custom resource (CR). The HyperConverged CR supports more granular configuration. CR configuration You can configure the KSM activation feature by editing the spec.configuration.ksmConfiguration stanza of the HyperConverged CR. 
You enable the feature and configure settings by editing the ksmConfiguration stanza. You disable the feature by deleting the ksmConfiguration stanza. You can allow OpenShift Virtualization to enable KSM on only a subset of nodes by adding node selection syntax to the ksmConfiguration.nodeLabelSelector field. Note Even if the KSM activation feature is disabled in OpenShift Virtualization, an administrator can still enable KSM on nodes that support it. 13.5.2.2. KSM node labels OpenShift Virtualization identifies nodes that are configured to support KSM and applies the following node labels: kubevirt.io/ksm-handler-managed: "false" This label is set to "true" when OpenShift Virtualization activates KSM on a node that is experiencing memory overload. This label is not set to "true" if an administrator activates KSM. kubevirt.io/ksm-enabled: "false" This label is set to "true" when KSM is activated on a node, even if OpenShift Virtualization did not activate KSM. These labels are not applied to nodes that do not support KSM. 13.5.3. Configuring KSM activation by using the web console You can allow OpenShift Virtualization to activate kernel samepage merging (KSM) on all nodes in your cluster by using the OpenShift Container Platform web console. Procedure From the side menu, click Virtualization Overview . Select the Settings tab. Select the Cluster tab. Expand Resource management . Enable or disable the feature for all nodes: Set Kernel Samepage Merging (KSM) to on. Set Kernel Samepage Merging (KSM) to off. 13.5.4. Configuring KSM activation by using the CLI You can enable or disable OpenShift Virtualization's kernel samepage merging (KSM) activation feature by editing the HyperConverged custom resource (CR). Use this method if you want OpenShift Virtualization to activate KSM on only a subset of nodes. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the ksmConfiguration stanza: To enable the KSM activation feature for all nodes, set the nodeLabelSelector value to {} . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {} # ... To enable the KSM activation feature on a subset of nodes, edit the nodeLabelSelector field. Add syntax that matches the nodes where you want OpenShift Virtualization to enable KSM. For example, the following configuration allows OpenShift Virtualization to enable KSM on nodes where both <first_example_key> and <second_example_key> are set to "true" : apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: "true" <second_example_key>: "true" # ... To disable the KSM activation feature, delete the ksmConfiguration stanza. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: # ... Save the file. 13.5.5. Additional resources Specifying nodes for virtual machines Placing pods on specific nodes using node selectors Managing kernel samepage merging in the Red Hat Enterprise Linux (RHEL) documentation | [
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1",
"virtctl restart <vm_name> -n <namespace>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate",
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always",
"\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64",
"apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc",
"aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave",
"aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2",
"oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis -A",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {}",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: \"true\" <second_example_key>: \"true\"",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/virtualization/nodes |
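As a quick verification sketch for the KSM node labels described above, you can list the nodes on which KSM is currently active; the label key comes from the preceding section, while <node_name> is a placeholder:

oc get nodes -l kubevirt.io/ksm-enabled=true
oc get node <node_name> --show-labels | grep ksm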
Red Hat Data Grid | Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/red-hat-data-grid |
5.2.10. /proc/filesystems | 5.2.10. /proc/filesystems This file displays a list of the file system types currently supported by the kernel. Sample output from a generic /proc/filesystems file looks similar to the following: The first column signifies whether the file system is mounted on a block device. Those beginning with nodev are not mounted on a device. The second column lists the names of the file systems supported. The mount command cycles through the file systems listed here when one is not specified as an argument. | [
"nodev sysfs nodev rootfs nodev bdev nodev proc nodev sockfs nodev binfmt_misc nodev usbfs nodev usbdevfs nodev futexfs nodev tmpfs nodev pipefs nodev eventpollfs nodev devpts ext2 nodev ramfs nodev hugetlbfs iso9660 nodev mqueue ext3 nodev rpc_pipefs nodev autofs"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-filesystems |
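For example, you can print the full list or keep only the file systems that require a block device by filtering out the nodev entries; with the sample output above, the second command would return ext2, iso9660, and ext3:

cat /proc/filesystems
grep -v nodev /proc/filesystems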
Chapter 15. Network-Bound Disk Encryption (NBDE) | Chapter 15. Network-Bound Disk Encryption (NBDE) 15.1. About disk encryption technology Network-Bound Disk Encryption (NBDE) allows you to encrypt root volumes of hard drives on physical and virtual machines without having to manually enter a password when restarting machines. 15.1.1. Disk encryption technology comparison To understand the merits of Network-Bound Disk Encryption (NBDE) for securing data at rest on edge servers, compare key escrow and TPM disk encryption without Clevis to NBDE on systems running Red Hat Enterprise Linux (RHEL). The following table presents some tradeoffs to consider around the threat model and the complexity of each encryption solution. Scenario Key escrow TPM disk encryption (without Clevis) NBDE Protects against single-disk theft X X X Protects against entire-server theft X X Systems can reboot independently from the network X No periodic rekeying X Key is never transmitted over a network X X Supported by OpenShift X X 15.1.1.1. Key escrow Key escrow is the traditional system for storing cryptographic keys. The key server on the network stores the encryption key for a node with an encrypted boot disk and returns it when queried. The complexities around key management, transport encryption, and authentication do not make this a reasonable choice for boot disk encryption. Although available in Red Hat Enterprise Linux (RHEL), key escrow-based disk encryption setup and management is a manual process and not suited to OpenShift Container Platform automation operations, including automated addition of nodes, and currently not supported by OpenShift Container Platform. 15.1.1.2. TPM encryption Trusted Platform Module (TPM) disk encryption is best suited for data centers or installations in remote protected locations. Full disk encryption utilities such as dm-crypt and BitLocker encrypt disks with a TPM bind key, and then store the TPM bind key in the TPM, which is attached to the motherboard of the node. The main benefit of this method is that there is no external dependency, and the node is able to decrypt its own disks at boot time without any external interaction. TPM disk encryption protects against decryption of data if the disk is stolen from the node and analyzed externally. However, for insecure locations this may not be sufficient. For example, if an attacker steals the entire node, the attacker can intercept the data when powering on the node, because the node decrypts its own disks. This applies to nodes with physical TPM2 chips as well as virtual machines with Virtual Trusted Platform Module (VTPM) access. 15.1.1.3. Network-Bound Disk Encryption (NBDE) Network-Bound Disk Encryption (NBDE) effectively ties the encryption key to an external server or set of servers in a secure and anonymous way across the network. This is not a key escrow, in that the nodes do not store the encryption key or transfer it over the network, but otherwise behaves in a similar fashion. Clevis and Tang are generic client and server components that provide network-bound encryption. Red Hat Enterprise Linux CoreOS (RHCOS) uses these components in conjunction with Linux Unified Key Setup-on-disk-format (LUKS) to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption. When a node starts, it attempts to contact a predefined set of Tang servers by performing a cryptographic handshake. 
If it can reach the required number of Tang servers, the node can construct its disk decryption key and unlock the disks to continue booting. If the node cannot access a Tang server due to a network outage or server unavailability, the node cannot boot and continues retrying indefinitely until the Tang servers become available again. Because the key is effectively tied to the node's presence in a network, an attacker attempting to gain access to the data at rest would need to obtain both the disks on the node, and network access to the Tang server as well. The following figure illustrates the deployment model for NBDE. The following figure illustrates NBDE behavior during a reboot. 15.1.1.4. Secret sharing encryption Shamir's secret sharing (sss) is a cryptographic algorithm to securely divide up, distribute, and re-assemble keys. Using this algorithm, OpenShift Container Platform can support more complicated mixtures of key protection. When you configure a cluster node to use multiple Tang servers, OpenShift Container Platform uses sss to set up a decryption policy that will succeed if at least one of the specified servers is available. You can create layers for additional security. For example, you can define a policy where OpenShift Container Platform requires both the TPM and one of the given list of Tang servers to decrypt the disk. 15.1.2. Tang server disk encryption The following components and technologies implement Network-Bound Disk Encryption (NBDE). Tang is a server for binding data to network presence. It makes a node containing the data available when the node is bound to a certain secure network. Tang is stateless and does not require Transport Layer Security (TLS) or authentication. Unlike escrow-based solutions, where the key server stores all encryption keys and has knowledge of every encryption key, Tang never interacts with any node keys, so it never gains any identifying information from the node. Clevis is a pluggable framework for automated decryption that provides automated unlocking of Linux Unified Key Setup-on-disk-format (LUKS) volumes. The Clevis package runs on the node and provides the client side of the feature. A Clevis pin is a plugin into the Clevis framework. There are three pin types: TPM2 Binds the disk encryption to the TPM2. Tang Binds the disk encryption to a Tang server to enable NBDE. Shamir's secret sharing (sss) Allows more complex combinations of other pins. It allows more nuanced policies such as the following: Must be able to reach one of these three Tang servers Must be able to reach three of these five Tang servers Must be able to reach the TPM2 AND at least one of these three Tang servers 15.1.3. Tang server location planning When planning your Tang server environment, consider the physical and network locations of the Tang servers. Physical location The geographic location of the Tang servers is relatively unimportant, as long as they are suitably secured from unauthorized access or theft and offer the required availability and accessibility to run a critical service. Nodes with Clevis clients do not require local Tang servers as long as the Tang servers are available at all times. Disaster recovery requires both redundant power and redundant network connectivity to Tang servers regardless of their location. Network location Any node with network access to the Tang servers can decrypt their own disk partitions, or any other disks encrypted by the same Tang servers. 
Select network locations for the Tang servers that ensure the presence or absence of network connectivity from a given host allows for permission to decrypt. For example, firewall protections might be in place to prohibit access from any type of guest or public network, or any network jack located in an unsecured area of the building. Additionally, maintain network segregation between production and development networks. This assists in defining appropriate network locations and adds an additional layer of security. Do not deploy Tang servers on the same resource, for example, the same rolebindings.rbac.authorization.k8s.io cluster, that they are responsible for unlocking. However, a cluster of Tang servers and other security resources can be a useful configuration to enable support of multiple additional clusters and cluster resources. 15.1.4. Tang server sizing requirements The requirements around availability, network, and physical location drive the decision of how many Tang servers to use, rather than any concern over server capacity. Tang servers do not maintain the state of data encrypted using Tang resources. Tang servers are either fully independent or share only their key material, which enables them to scale well. There are two ways Tang servers handle key material: Multiple Tang servers share key material: You must load balance Tang servers sharing keys behind the same URL. The configuration can be as simple as round-robin DNS, or you can use physical load balancers. You can scale from a single Tang server to multiple Tang servers. Scaling Tang servers does not require rekeying or client reconfiguration on the node when the Tang servers share key material and the same URL. Client node setup and key rotation only requires one Tang server. Multiple Tang servers generate their own key material: You can configure multiple Tang servers at installation time. You can scale an individual Tang server behind a load balancer. All Tang servers must be available during client node setup or key rotation. When a client node boots using the default configuration, the Clevis client contacts all Tang servers. Only n Tang servers must be online to proceed with decryption. The default value for n is 1. Red Hat does not support post-installation configuration that changes the behavior of the Tang servers. 15.1.5. Logging considerations Centralized logging of Tang traffic is advantageous because it might allow you to detect such things as unexpected decryption requests. For example: A node requesting decryption of a passphrase that does not correspond to its boot sequence A node requesting decryption outside of a known maintenance activity, such as cycling keys 15.2. Tang server installation considerations 15.2.1. Installation scenarios Consider the following recommendations when planning Tang server installations: Small environments can use a single set of key material, even when using multiple Tang servers: Key rotations are easier. Tang servers can scale easily to permit high availability. Large environments can benefit from multiple sets of key material: Physically diverse installations do not require the copying and synchronizing of key material between geographic regions. Key rotations are more complex in large environments. Node installation and rekeying require network connectivity to all Tang servers. A small increase in network traffic can occur due to a booting node querying all Tang servers during decryption. 
Note that while only one Clevis client query must succeed, Clevis queries all Tang servers. Further complexity: Additional manual reconfiguration can permit the Shamir's secret sharing (sss) of any N of M servers online in order to decrypt the disk partition. Decrypting disks in this scenario requires multiple sets of key material, and manual management of Tang servers and nodes with Clevis clients after the initial installation. High level recommendations: For a single RAN deployment, a limited set of Tang servers can run in the corresponding domain controller (DC). For multiple RAN deployments, you must decide whether to run Tang servers in each corresponding DC or whether a global Tang environment better suits the other needs and requirements of the system. 15.2.2. Installing a Tang server Procedure You can install a Tang server on a Red Hat Enterprise Linux (RHEL) machine using either of the following commands: Install the Tang server by using the yum command: USD sudo yum install tang Install the Tang server by using the dnf command: USD sudo dnf install tang Note Installation can also be containerized and is very lightweight. 15.2.2.1. Compute requirements The computational requirements for the Tang server are very low. Any typical server grade configuration that you would use to deploy a server into production can provision sufficient compute capacity. High availability considerations are solely for availability and not additional compute power to satisfy client demands. 15.2.2.2. Automatic start at boot Due to the sensitive nature of the key material the Tang server uses, you should keep in mind that the overhead of manual intervention during the Tang server's boot sequence can be beneficial. By default, if a Tang server starts and does not have key material present in the expected local volume, it will create fresh material and serve it. You can avoid this default behavior by either starting with pre-existing key material or aborting the startup and waiting for manual intervention. 15.2.2.3. HTTP versus HTTPS Traffic to the Tang server can be encrypted (HTTPS) or plaintext (HTTP). There are no significant security advantages of encrypting this traffic, and leaving it decrypted removes any complexity or failure conditions related to Transport Layer Security (TLS) certificate checking in the node running a Clevis client. While it is possible to perform passive monitoring of unencrypted traffic between the node's Clevis client and the Tang server, the ability to use this traffic to determine the key material is at best a future theoretical concern. Any such traffic analysis would require large quantities of captured data. Key rotation would immediately invalidate it. Finally, any threat actor able to perform passive monitoring has already obtained the necessary network access to perform manual connections to the Tang server and can perform the simpler manual decryption of captured Clevis headers. However, because other network policies in place at the installation site might require traffic encryption regardless of application, consider leaving this decision to the cluster administrator. 15.2.3. Installation considerations with Network-Bound Disk Encryption Network-Bound Disk Encryption (NBDE) must be enabled when a cluster node is installed. However, you can change the disk encryption policy at any time after it was initialized at installation. 
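As an illustrative sketch only, the underlying Red Hat Enterprise Linux components are typically exercised as follows: the Tang service is socket-activated on the server, and a Clevis sss pin binds a LUKS device so that any one of the listed Tang servers can unlock it. The URLs, device path, and threshold are placeholders, and OpenShift Container Platform nodes are configured through Ignition at installation time rather than by running these commands manually:

sudo systemctl enable tangd.socket --now
sudo clevis luks bind -d /dev/sda4 sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.example.com"},{"url":"http://tang2.example.com"}]}}'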
Additional resources Configuring automated unlocking of encrypted volumes using policy-based decryption Official Tang server container Encrypting and mirroring disks during installation 15.3. Tang server encryption key management The cryptographic mechanism to recreate the encryption key is based on the blinded key stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node's encrypted disk, periodic rekeying is advisable. You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys. 15.3.1. Backing up keys for a Tang server The Tang server uses /usr/libexec/tangd-keygen to generate new keys and stores them in the /var/db/tang directory by default. To recover the Tang server in the event of a failure, back up this directory. The keys are sensitive and because they are able to perform the boot disk decryption of all hosts that have used them, the keys must be protected accordingly. Procedure Copy the backup key from the /var/db/tang directory to the temp directory from which you can restore the key. 15.3.2. Recovering keys for a Tang server You can recover the keys for a Tang server by accessing the keys from a backup. Procedure Restore the key from your backup folder to the /var/db/tang/ directory. When the Tang server starts up, it advertises and uses these restored keys. 15.3.3. Rekeying Tang servers This procedure uses a set of three Tang servers, each with unique keys, as an example. Using redundant Tang servers reduces the chances of nodes failing to boot automatically. Rekeying a Tang server, and all associated NBDE-encrypted nodes, is a three-step procedure. Prerequisites A working Network-Bound Disk Encryption (NBDE) installation on one or more nodes. Procedure Generate a new Tang server key. Rekey all NBDE-encrypted nodes so they use the new key. Delete the old Tang server key. Note Deleting the old key before all NBDE-encrypted nodes have completed their rekeying causes those nodes to become overly dependent on any other configured Tang servers. Figure 15.1. Example workflow for rekeying a Tang server 15.3.3.1. Generating a new Tang server key Prerequisites A root shell on the Linux machine running the Tang server. To facilitate verification of the Tang server key rotation, encrypt a small test file with the old key: # echo plaintext | clevis encrypt tang '{"url":"http://localhost:7500"}' -y >/tmp/encrypted.oldkey Verify that the encryption succeeded and the file can be decrypted to produce the same string plaintext : # clevis decrypt </tmp/encrypted.oldkey Procedure Locate and access the directory that stores the Tang server key. This is usually the /var/db/tang directory. Check the currently advertised key thumbprint: # tang-show-keys 7500 Example output 36AHjNH3NZDSnlONLz1-V4ie6t8 Enter the Tang server key directory: # cd /var/db/tang/ List the current Tang server keys: # ls -A1 Example output 36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk During normal Tang server operations, there are two .jwk files in this directory: one for signing and verification, and another for key derivation. Disable advertisement of the old keys: # for key in *.jwk; do \ mv -- "USDkey" ".USDkey"; \ done New clients setting up Network-Bound Disk Encryption (NBDE) or requesting keys will no longer see the old keys. 
Existing clients can still access and use the old keys until they are deleted. The Tang server reads but does not advertise keys stored in UNIX hidden files, which start with the . character. Generate a new key: # /usr/libexec/tangd-keygen /var/db/tang List the current Tang server keys to verify the old keys are no longer advertised, as they are now hidden files, and new keys are present: # ls -A1 Example output .36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Tang automatically advertises the new keys. Note More recent Tang server installations include a helper /usr/libexec/tangd-rotate-keys script that takes care of disabling advertisement and generating the new keys simultaneously. If you are running multiple Tang servers behind a load balancer that share the same key material, ensure the changes made here are properly synchronized across the entire set of servers before proceeding. Verification Verify that the Tang server is advertising the new key, and not advertising the old key: # tang-show-keys 7500 Example output WOjQYkyK7DxY_T5pMncMO5w0f6E Verify that the old key, while not advertised, is still available to decryption requests: # clevis decrypt </tmp/encrypted.oldkey 15.3.3.2. Rekeying all NBDE nodes You can rekey all of the nodes on a remote cluster by using a DaemonSet object without incurring any downtime to the remote cluster. Note If a node loses power during the rekeying, it might become unbootable, and must be redeployed via Red Hat Advanced Cluster Management (RHACM) or a GitOps pipeline. Prerequisites cluster-admin access to all clusters with Network-Bound Disk Encryption (NBDE) nodes. All Tang servers must be accessible to every NBDE node undergoing rekeying, even if the keys of a Tang server have not changed. Obtain the Tang server URL and key thumbprint for every Tang server. Procedure Create a DaemonSet object based on the following template. This template sets up three redundant Tang servers, but can be easily adapted to other situations.
Change the Tang server URLs and thumbprints in the NEW_TANG_PIN environment to suit your environment: apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi8/ubi-minimal:8.4 imagePullPolicy: IfNotPresent command: - "/sbin/chroot" - "/host" - "/bin/bash" - "-ec" args: - | rm -f /tmp/rekey-complete || true echo "Current tang pin:" clevis-luks-list -d USDROOT_DEV -s 1 echo "Applying new tang pin: USDNEW_TANG_PIN" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c "USDNEW_TANG_PIN" echo "Pin applied successfully" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {"t":1,"pins":{"tang":[ {"url":"http://tangserver01:7500","thp":"WOjQYkyK7DxY_T5pMncMO5w0f6E"}, {"url":"http://tangserver02:7500","thp":"I5Ynh2JefoAO3tNH9TgI4obIaXI"}, {"url":"http://tangserver03:7500","thp":"38qWZVeDKzCPG9pHLqKzs6k1ons"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon In this case, even though you are rekeying tangserver01 , you must specify not only the new thumbprint for tangserver01 , but also the current thumbprints for all other Tang servers. Failure to specify all thumbprints for a rekeying operation opens up the opportunity for a man-in-the-middle attack. To distribute the daemon set to every cluster that must be rekeyed, run the following command: USD oc apply -f tang-rekey.yaml However, to run at scale, wrap the daemon set in an ACM policy. This ACM configuration must contain one policy to deploy the daemon set, a second policy to check that all the daemon set pods are READY, and a placement rule to apply it to the appropriate set of clusters. Note After validating that the daemon set has successfully rekeyed all servers, delete the daemon set. If you do not delete the daemon set, it must be deleted before the rekeying operation. Verification After you distribute the daemon set, monitor the daemon sets to ensure that the rekeying has completed successfully. The script in the example daemon set terminates with an error if the rekeying failed, and remains in the CURRENT state if successful. There is also a readiness probe that marks the pod as READY when the rekeying has completed successfully. This is an example of the output listing for the daemon set before the rekeying has completed: USD oc get -n openshift-machine-config-operator ds tang-rekey Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s This is an example of the output listing for the daemon set after the rekeying has completed successfully: USD oc get -n openshift-machine-config-operator ds tang-rekey Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h Rekeying usually takes a few minutes to complete. 
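When many clusters are being rekeyed, it can be convenient to compare the counts programmatically rather than reading the table by eye. The following one-liner is a sketch only; it assumes the daemon set name and namespace from the example above and relies on the standard DaemonSet status fields numberReady and desiredNumberScheduled: USD oc get ds tang-rekey -n openshift-machine-config-operator -o jsonpath='{.status.numberReady}/{.status.desiredNumberScheduled}{"\n"}' The rekeying has completed on every node when the two numbers printed by this command are equal.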
Note If you use ACM policies to distribute the daemon sets to multiple clusters, you must include a compliance policy that checks every daemon set's READY count is equal to the DESIRED count. In this way, compliance to such a policy demonstrates that all daemon set pods are READY and the rekeying has completed successfully. You could also use an ACM search to query all of the daemon sets' states. 15.3.3.3. Troubleshooting temporary rekeying errors for Tang servers To determine if the error condition from rekeying the Tang servers is temporary, perform the following procedure. Temporary error conditions might include: Temporary network outages Tang server maintenance Generally, when these types of temporary error conditions occur, you can wait until the daemon set succeeds in resolving the error or you can delete the daemon set and not try again until the temporary error condition has been resolved. Procedure Restart the pod that performs the rekeying operation using the normal Kubernetes pod restart policy. If any of the associated Tang servers are unavailable, try rekeying until all the servers are back online. 15.3.3.4. Troubleshooting permanent rekeying errors for Tang servers If, after rekeying the Tang servers, the READY count does not equal the DESIRED count after an extended period of time, it might indicate a permanent failure condition. In this case, the following conditions might apply: A typographical error in the Tang server URL or thumbprint in the NEW_TANG_PIN definition. The Tang server is decommissioned or the keys are permanently lost. Prerequisites The commands shown in this procedure can be run on the Tang server or on any Linux system that has network access to the Tang server. Procedure Validate the Tang server configuration by performing a simple encrypt and decrypt operation on each Tang server's configuration as defined in the daemon set. This is an example of an encryption and decryption attempt with a bad thumbprint: USD echo "okay" | clevis encrypt tang \ '{"url":"http://tangserver02:7500","thp":"badthumbprint"}' | \ clevis decrypt Example output Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'! This is an example of an encryption and decryption attempt with a good thumbprint: USD echo "okay" | clevis encrypt tang \ '{"url":"http://tangserver03:7500","thp":"goodthumbprint"}' | \ clevis decrypt Example output okay After you identify the root cause, remedy the underlying situation: Delete the non-working daemon set. Edit the daemon set definition to fix the underlying issue. This might include any of the following actions: Edit a Tang server entry to correct the URL and thumbprint. Remove a Tang server that is no longer in service. Add a new Tang server that is a replacement for a decommissioned server. Distribute the updated daemon set again. Note When replacing, removing, or adding a Tang server from a configuration, the rekeying operation will succeed as long as at least one original server is still functional, including the server currently being rekeyed. If none of the original Tang servers are functional or can be recovered, recovery of the system is impossible and you must redeploy the affected nodes. Verification Check the logs from each pod in the daemon set to determine whether the rekeying completed successfully. If the rekeying is not successful, the logs might indicate the failure condition. 
Locate the name of the container that was created by the daemon set: USD oc get pods -A | grep tang-rekey Example output openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m Print the logs from the container. The following log is from a completed successful rekeying operation: USD oc logs tang-rekey-7ks6h Example output Current tang pin: 1: sss '{"t":1,"pins":{"tang":[{"url":"http://10.46.55.192:7500"},{"url":"http://10.46.55.192:7501"},{"url":"http://10.46.55.192:7502"}]}}' Applying new tang pin: {"t":1,"pins":{"tang":[ {"url":"http://tangserver01:7500","thp":"WOjQYkyK7DxY_T5pMncMO5w0f6E"}, {"url":"http://tangserver02:7500","thp":"I5Ynh2JefoAO3tNH9TgI4obIaXI"}, {"url":"http://tangserver03:7500","thp":"38qWZVeDKzCPG9pHLqKzs6k1ons"} ]}} Updating binding... Binding edited successfully Pin applied successfully 15.3.4. Deleting old Tang server keys Prerequisites A root shell on the Linux machine running the Tang server. Procedure Locate and access the directory where the Tang server key is stored. This is usually the /var/db/tang directory: # cd /var/db/tang/ List the current Tang server keys, showing the advertised and unadvertised keys: # ls -A1 Example output .36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Delete the old keys: # rm .*.jwk List the current Tang server keys to verify the unadvertised keys are no longer present: # ls -A1 Example output Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk Verification At this point, the server still advertises the new keys, but an attempt to decrypt based on the old key will fail. Query the Tang server for the current advertised key thumbprints: # tang-show-keys 7500 Example output WOjQYkyK7DxY_T5pMncMO5w0f6E Decrypt the test file created earlier to verify that decryption against the old keys fails: # clevis decrypt </tmp/encrypted.oldkey Example output Error communicating with the server! If you are running multiple Tang servers behind a load balancer that share the same key material, ensure the changes made are properly synchronized across the entire set of servers before proceeding. 15.4. Disaster recovery considerations This section describes several potential disaster situations and the procedures to respond to each of them. Additional situations will be added here as they are discovered or presumed likely to be possible. 15.4.1. Loss of a client machine The loss of a cluster node that uses the Tang server to decrypt its disk partition is not a disaster. Whether the machine was stolen, suffered hardware failure, or another loss scenario is not important: the disks are encrypted and considered unrecoverable. However, in the event of theft, a precautionary rotation of the Tang server's keys and rekeying of all remaining nodes would be prudent to ensure the disks remain unrecoverable even in the event the thieves subsequently gain access to the Tang servers. To recover from this situation, either reinstall or replace the node. 15.4.2. Planning for a loss of client network connectivity The loss of network connectivity to an individual node will cause it to become unable to boot in an unattended fashion.
If you are planning work that might cause a loss of network connectivity, you can reveal the passphrase for an onsite technician to use manually, and then rotate the keys afterwards to invalidate it: Procedure Before the network becomes unavailable, show the password used in the first slot -s 1 of device /dev/vda2 with this command: USD sudo clevis luks pass -d /dev/vda2 -s 1 Invalidate that value and regenerate a new random boot-time passphrase with this command: USD sudo clevis luks regen -d /dev/vda2 -s 1 15.4.3. Unexpected loss of network connectivity If the network disruption is unexpected and a node reboots, consider the following scenarios: If any nodes are still online, ensure that they do not reboot until network connectivity is restored. This is not applicable for single-node clusters. The node will remain offline until such time that either network connectivity is restored, or a pre-established passphrase is entered manually at the console. In exceptional circumstances, network administrators might be able to reconfigure network segments to reestablish access, but this is counter to the intent of NBDE, which is that lack of network access means lack of ability to boot. The lack of network access at the node can reasonably be expected to impact that node's ability to function as well as its ability to boot. Even if the node were to boot via manual intervention, the lack of network access would make it effectively useless. 15.4.4. Recovering network connectivity manually A somewhat complex and manually intensive process is also available to the onsite technician for network recovery. Procedure The onsite technician extracts the Clevis header from the hard disks. Depending on BIOS lockdown, this might involve removing the disks and installing them in a lab machine. The onsite technician transmits the Clevis headers to a colleague with legitimate access to the Tang network who then performs the decryption. Due to the necessity of limited access to the Tang network, the technician should not be able to access that network via VPN or other remote connectivity. Similarly, the technician cannot patch the remote server through to this network in order to decrypt the disks automatically. The technician reinstalls the disk and manually enters the plain text passphrase provided by their colleague. The machine successfully starts even without direct access to the Tang servers. Note that the transmission of the key material from the install site to another site with network access must be done carefully. When network connectivity is restored, the technician rotates the encryption keys. 15.4.5. Emergency recovery of network connectivity If you are unable to recover network connectivity manually, consider the following steps. Be aware that these steps are discouraged if other methods to recover network connectivity are available. This method must only be performed by a highly trusted technician. Taking the Tang server's key material to the remote site is considered to be a breach of the key material and all servers must be rekeyed and re-encrypted. This method must be used in extreme cases only, or as a proof of concept recovery method to demonstrate its viability. Equally extreme, but theoretically possible, is to power the server in question with an Uninterruptible Power Supply (UPS), transport the server to a location with network connectivity to boot and decrypt the disks, and then restore the server at the original location on battery power to continue operation. 
If you want to use a backup manual passphrase, you must create it before the failure situation occurs. Just as attack scenarios become more complex with TPM and Tang compared to a stand-alone Tang installation, so emergency disaster recovery processes are also made more complex if leveraging the same method. 15.4.6. Loss of a network segment The loss of a network segment, making a Tang server temporarily unavailable, has the following consequences: OpenShift Container Platform nodes continue to boot as normal, provided other servers are available. New nodes cannot establish their encryption keys until the network segment is restored. In this case, ensure connectivity to remote geographic locations for the purposes of high availability and redundancy. This is because when you are installing a new node or rekeying an existing node, all of the Tang servers you are referencing in that operation must be available. A hybrid model for a vastly diverse network, such as five geographic regions in which each client is connected to the closest three servers, is worth investigating. In this scenario, new clients are able to establish their encryption keys with the subset of servers that are reachable. For example, in the set of tang1 , tang2 and tang3 servers, if tang2 becomes unreachable, clients can still establish their encryption keys with tang1 and tang3 , and at a later time re-establish with the full set. This can involve either manual intervention or more complex automation. 15.4.7. Loss of a Tang server The loss of an individual Tang server within a load balanced set of servers with identical key material is completely transparent to the clients. The temporary failure of all Tang servers associated with the same URL, that is, the entire load balanced set, can be considered the same as the loss of a network segment. Existing clients have the ability to decrypt their disk partitions so long as another preconfigured Tang server is available. New clients cannot enroll until at least one of these servers comes back online. You can mitigate the physical loss of a Tang server by either reinstalling the server or restoring the server from backups. Ensure that the backup and restore processes for the key material are adequately protected from unauthorized access. 15.4.8. Rekeying compromised key material If key material is potentially exposed to unauthorized third parties, such as through the physical theft of a Tang server or associated data, immediately rotate the keys. Procedure Rekey any Tang server holding the affected material. Rekey all clients using the Tang server. Destroy the original key material. Scrutinize any incidents that result in unintended exposure of the master encryption key. If possible, take compromised nodes offline and re-encrypt their disks. Tip Reformatting and reinstalling on the same physical hardware, although slow, is easy to automate and test. | [
"sudo yum install tang",
"sudo dnf install tang",
"echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey",
"clevis decrypt </tmp/encrypted.oldkey",
"tang-show-keys 7500",
"36AHjNH3NZDSnlONLz1-V4ie6t8",
"cd /var/db/tang/",
"ls -A1",
"36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk",
"for key in *.jwk; do mv -- \"USDkey\" \".USDkey\"; done",
"/usr/libexec/tangd-keygen /var/db/tang",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encrypted.oldkey",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi8/ubi-minimal:8.4 imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d USDROOT_DEV -s 1 echo \"Applying new tang pin: USDNEW_TANG_PIN\" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c \"USDNEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon",
"oc apply -f tang-rekey.yaml",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt",
"Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt",
"okay",
"oc get pods -A | grep tang-rekey",
"openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m",
"oc logs tang-rekey-7ks6h",
"Current tang pin: 1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully",
"cd /var/db/tang/",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"rm .*.jwk",
"ls -A1",
"Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encryptValidation",
"Error communicating with the server!",
"sudo clevis luks pass -d /dev/vda2 -s 1",
"sudo clevis luks regen -d /dev/vda2 -s 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/network-bound-disk-encryption-nbde |
15.4. RemoteCacheStore Parameters for Rolling Upgrades | 15.4. RemoteCacheStore Parameters for Rolling Upgrades 15.4.1. rawValues and RemoteCacheStore By default, the RemoteCacheStore store's values are wrapped in InternalCacheEntry. Enabling the rawValues parameter causes the raw values to be stored instead for interoperability with direct access by RemoteCacheManagers. rawValues must be enabled in order to interact with a Hot Rod cache via both RemoteCacheStore and RemoteCacheManager. 15.4.2. hotRodWrapping The hotRodWrapping parameter is a shortcut that enables rawValues and sets an appropriate marshaller and entry wrapper for performing Rolling Upgrades. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-remotecachestore_parameters_for_rolling_upgrades
5.6.3. User Accounts | 5.6.3. User Accounts Because FTP passes unencrypted usernames and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts. To disable user accounts in vsftpd , add the following directive to /etc/vsftpd/vsftpd.conf : 5.6.3.1. Restricting User Accounts The easiest way to disable a specific group of accounts, such as the root user and those with sudo privileges, from accessing an FTP server is to use a PAM list file as described in Section 4.4.1, "Allowing Root Access" . The PAM configuration file for vsftpd is /etc/pam.d/vsftpd . It is also possible to disable user accounts within each service directly. To disable specific user accounts in vsftpd , add the username to /etc/vsftpd.ftpusers . | [
"local_enable=NO"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-server-ftp-denylocal |
7.41. dracut | 7.41. dracut 7.41.1. RHBA-2015:1328 - dracut bug fix and enhancement update Updated dracut packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The dracut packages include an event-driven initramfs generator infrastructure based on the udev device manager. The virtual file system, initramfs, is loaded together with the kernel at boot time and initializes the system, so it can read and boot from the root partition. Bug Fixes BZ# 1198117 Previously, the dracut utility incorrectly printed an error message if the /tmp/net.USDnetif.override file did not exist. With this update, dracut verifies whether /tmp/net.USDnetif.override exists before it attempts to read it, which prevents the described error from occurring. BZ# 1005886 Prior to this update, the dracut logrotate configuration determined that the "time" option had priority over the "size" option. Consequently, the dracut logs were rotated only yearly regardless of their size. This update removes the "time" option of the logrotate configuration, and the dracut logs now rotate when the size exceeds 1 MB. BZ# 1069275 If "ip=ibft" was specified as a kernel command-line argument, but the "ifname=<iface>:<mac>" parameter was not, dracut did not handle network interfaces correctly. As a consequence, iSCSI disks were not connected to the system, and thus the system failed to boot. With this update, dracut handles "ip=ibft" as a kernel command-line argument, even without "ifname=<iface>:<mac>", and iSCSI disks are now connected to the system successfully resulting in successful system boot. BZ# 1085562 If the /etc/crypttab file did not contain a new line as the last character, dracut failed to parse the last line of the file, and the encrypted disk could not be unlocked. This update fixes dracut to handle /etc/crypttab without a new line at the end, and the encrypted disk specified on the last line is now handled as expected, requesting a password and unlocking the disk. BZ# 1130565 If the /etc/lvm/lvm.conf file had host tags defined, the initramfs virtual file system did not insert the /etc/lvm/lvm_hostname.conf file during kernel upgrade, which previously led to a boot failure. This update adds /etc/lvm/lvm_hostname.conf along with /etc/lvm/lvm.conf, and the system now boots with host tags as intended. BZ# 1176671 Previously, dracut did not parse the kernel command line correctly for some iSCSI parameters, which led to iSCSI disks not being connected. With this update, dracut parses the kernel command-line parameters for iSCSI correctly, and iSCSI disks are now connected successfully. BZ# 1184142 Due to an internal change in the nss-softokn-freebl package, dracut could not build an initramfs file in FIPS mode. To fix this bug, nss-softokn-freebl delivers its own dracut module and dracut now requires nss-softokn-freebl as a dependency. As a result, dracut can build FIPS-enabled initramfs with all files. BZ# 1191721 When network parameters were specified on the kernel command line, dracut only attempted to connect to iSCSI targets provided the network could be brought up. Consequently, for misconfigured networks, iSCSI firmware settings or iSCSI offload connections were not explored. To fix this bug, dracut now attempts to connect to the iSCSI targets even if after a certain timeout no network connection can be brought up. As a result, iSCSI targets can be connected even for misconfigured kernel command-line network parameters. 
BZ# 1193528 Due to changes in FIPS requirements, a new deterministic random bit generator (DRBG) was added to the kernel for FIPS purposes. With this update, dracut loads the drbg module like other kernel modules in FIPS mode. Enhancements BZ# 1111358 With this update, dracut can boot from iSCSI on a network with VLANs configured, where the VLAN settings are stored in the iBFT BIOS. BZ# 1226905 LVM thin volumes are now supported in initramfs. Users of dracut are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-dracut
Chapter 17. Using the Red Hat Quay API | Chapter 17. Using the Red Hat Quay API Red Hat Quay provides a full OAuth 2 , RESTful API. [OAuth 2] RESTful API provides the following benefits: It is available from endpoint /api/v1 endpoint of your Red Hat Quay host. For example, https://<quay-server.example.com>/api/v1 . It allows users to connect to endpoints through their browser to GET , POST , DELETE , and PUT Red Hat Quay settings by enabling the Swagger UI. It can be accessed by applications that make API calls and use OAuth tokens. It sends and receives data as JSON. The following section describes how to access the Red Hat Quay API so that it can be used with your deployment. 17.1. Accessing the Quay API from Quay.io If you don't have your own Red Hat Quay cluster running yet, you can explore the Red Hat Quay API available from Quay.io from your web browser: The API Explorer that appears shows Quay.io API endpoints. You will not see superuser API endpoints or endpoints for Red Hat Quay features that are not enabled on Quay.io (such as Repository Mirroring). From API Explorer, you can get, and sometimes change, information on: Billing, subscriptions, and plans Repository builds and build triggers Error messages and global messages Repository images, manifests, permissions, notifications, vulnerabilities, and image signing Usage logs Organizations, members and OAuth applications User and robot accounts and more... Select to open an endpoint to view the Model Schema for each part of the endpoint. Open an endpoint, enter any required parameters (such as a repository name or image), then select the Try it out! button to query or change settings associated with a Quay.io endpoint. 17.2. Creating a v1 OAuth access token OAuth access tokens are credentials that allow you to access protected resources in a secure manner. With Red Hat Quay, you must create an OAuth access token before you can access the API endpoints of your organization. Use the following procedure to create an OAuth access token. Prerequisites You have logged in to Red Hat Quay as an administrator. Procedure On the main page, select an Organization. In the navigation pane, select Applications . Click Create New Application and provide a new application name, then press Enter . On the OAuth Applications page, select the name of your application. Optional. Enter the following information: Application Name Homepage URL Description Avatar E-mail Redirect/Callback URL prefix In the navigation pane, select Generate Token . Check the boxes for the following options: Administer Organization Administer Repositories Create Repositories View all visible repositories Read/Write to any accessible repositories Super User Access Administer User Read User Information Click Generate Access Token . You are redirected to a new page. Review the permissions that you are allowing, then click Authorize Application . Confirm your decision by clicking Authorize Application . You are redirected to the Access Token page. Copy and save the access token. Important This is the only opportunity to copy and save the access token. It cannot be reobtained after leaving this page. 17.3. Creating an OCI referrers OAuth access token In some cases, you might want to create an OCI referrers OAuth access token. This token is used to list OCI referrers of a manifest under a repository. Procedure Update your config.yaml file to include the FEATURE_REFERRERS_API: true field. For example: # ... FEATURE_REFERRERS_API: true # ... 
Enter the following command to Base64 encode your credentials: USD echo -n '<username>:<password>' | base64 Example output abcdeWFkbWluOjE5ODlraWROZXQxIQ== Enter the following command to use the base64 encoded string and modify the URL endpoint to your Red Hat Quay server: USD curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq Example output { "token": "<example_secret> } 17.4. Reassigning an OAuth access token Organization administrators can assign OAuth API tokens to be created by other user's with specific permissions. This allows the audit logs to be reflected accurately when the token is used by a user that has no organization administrative permissions to create an OAuth API token. Note The following procedure only works on the current Red Hat Quay UI. It is not currently implemented in the Red Hat Quay v2 UI. Prerequisites You are logged in as a user with organization administrative privileges, which allows you to assign an OAuth API token. Note OAuth API tokens are used for authentication and not authorization. For example, the user that you are assigning the OAuth token to must have the Admin team role to use administrative API endpoints. For more information, see Managing access to repositories . Procedure Optional. If not already, update your Red Hat Quay config.yaml file to include the FEATURE_ASSIGN_OAUTH_TOKEN: true field: # ... FEATURE_ASSIGN_OAUTH_TOKEN: true # ... Optional. Restart your Red Hat Quay registry. Log in to your Red Hat Quay registry as an organization administrator. Click the name of the organization in which you created the OAuth token for. In the navigation pane, click Applications . Click the proper application name. In the navigation pane, click Generate Token . Click Assign another user and enter the name of the user that will take over the OAuth token. Check the boxes for the desired permissions that you want the new user to have. For example, if you only want the new user to be able to create repositories, click Create Repositories . Important Permission control is defined by the team role within an organization and must be configured regardless of the options selected here. For example, the user that you are assigning the OAuth token to must have the Admin team role to use administrative API endpoints. Solely checking the Super User Access box does not actually grant the user this permission. Superusers must be configured via the config.yaml file and the box must be checked here. Click Assign token . A popup box appears that confirms authorization with the following message and shows you the approved permissions: This will prompt user <username> to generate a token with the following permissions: repo:create Click Assign token in the popup box. You are redirected to a new page that displays the following message: Token assigned successfully Verification After reassigning an OAuth token, the assigned user must accept the token to receive the bearer token, which is required to use API endpoints. Request that the assigned user logs into the Red Hat Quay registry. After they have logged in, they must click their username under Users and Organizations . In the navigation pane, they must click External Logins And Applications . Under Authorized Applications , they must confirm the application by clicking Authorize Application . 
They are directed to a new page where they must reconfirm by clicking Authorize Application . They are redirected to a new page that reveals their bearer token. They must save this bearer token, as it cannot be viewed again. 17.5. Accessing your Quay API from a web browser By enabling Swagger, you can access the API for your own Red Hat Quay instance through a web browser. This URL exposes the Red Hat Quay API explorer via the Swagger UI and this URL: That way of accessing the API does not include superuser endpoints that are available on Red Hat Quay installations. Here is an example of accessing a Red Hat Quay API interface running on the local system by running the swagger-ui container image: With the swagger-ui container running, open your web browser to localhost port 8888 to view API endpoints via the swagger-ui container. To avoid errors in the log such as "API calls must be invoked with an X-Requested-With header if called from a browser," add the following line to the config.yaml on all nodes in the cluster and restart Red Hat Quay: 17.6. Accessing the Red Hat Quay API from the command line You can use the curl command to GET, PUT, POST, or DELETE settings via the API for your Red Hat Quay cluster. Replace <token> with the OAuth access token you created earlier to get or change settings in the following examples. | [
"https://docs.quay.io/api/swagger/",
"FEATURE_REFERRERS_API: true",
"echo -n '<username>:<password>' | base64",
"abcdeWFkbWluOjE5ODlraWROZXQxIQ==",
"curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq",
"{ \"token\": \"<example_secret> }",
"FEATURE_ASSIGN_OAUTH_TOKEN: true",
"This will prompt user <username> to generate a token with the following permissions: repo:create",
"Token assigned successfully",
"https://<yourquayhost>/api/v1/discovery.",
"export SERVER_HOSTNAME=<yourhostname> sudo podman run -p 8888:8080 -e API_URL=https://USDSERVER_HOSTNAME:8443/api/v1/discovery docker.io/swaggerapi/swagger-ui",
"BROWSER_API_CALLS_XHR_ONLY: false"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/use_red_hat_quay/using-the-api |
Chapter 4. Integrated applications | Chapter 4. Integrated applications Cryostat integrates with specific applications that can enhance how you analyze data from your JFR recording. 4.1. Viewing a JFR recording on Grafana Cryostat 3.0 integrates with the Grafana application, so you can plot JFR recording data in Grafana. You can view plot data in time interval sections to precisely analyze the performance of your target JVM application. Prerequisites Entered your authentication details for your Cryostat instance. Created a JFR recording. See Creating a JFR recording in the Cryostat web console . Procedure Go to the Recordings menu or the Archives menu on your Cryostat instance. Depending on your needs, click either the Active Recordings tab or the Archived Recordings tab. Locate your JFR recording and then select the overflow menu. Figure 4.1. Overflow menu items available for an example JFR recording From the overflow menu, click the View in Grafana option. The Grafana application opens in a new web browser window. Enter your Red Hat OpenShift credentials in the Grafana web console login page, if prompted. A dashboard window opens and shows your JFR recording's data in various time-series plots. Optional: Interact with any plot by selecting a time-series segment on the plot. Grafana expands the on-screen data to show only the data for that time interval. Figure 4.2. Example of a Grafana dashboard with plotted graphs Revised on 2024-07-02 13:35:46 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_cryostat_to_manage_a_jfr_recording/assembly_integrated-applications_assembly_event-templates |
3.3. Creating Control Groups | 3.3. Creating Control Groups Use the cgcreate command to create transient cgroups in hierarchies you created yourself. The syntax for cgcreate is: where: -t (optional) - specifies a user (by user ID, uid) and a group (by group ID, gid) to own the tasks pseudo-file for this cgroup. This user can add tasks to the cgroup. Note Note that the only way to remove a process from a cgroup is to move it to a different cgroup. To be able to move a process, the user has to have write access to the destination cgroup; write access to the source cgroup is not necessary. -a (optional) - specifies a user (by user ID, uid) and a group (by group ID, gid) to own all pseudo-files other than tasks for this cgroup. This user can modify the access to system resources for tasks in this cgroup. -g - specifies the hierarchy in which the cgroup should be created, as a comma-separated list of the controllers associated with hierarchies. The list of controllers is followed by a colon and the path to the child group relative to the hierarchy. Do not include the hierarchy mount point in the path. Because all cgroups in the same hierarchy have the same controllers, the child group has the same controllers as its parent. As an alternative, you can create a child of the cgroup directly. To do so, use the mkdir command: For example: | [
"cgcreate -t uid : gid -a uid : gid -g controllers : path",
"~]# mkdir /sys/fs/cgroup/ controller / name / child_name",
"~]# mkdir /sys/fs/cgroup/net_prio/lab1/group1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-creating_cgroups-libcgroup |
Chapter 12. Restoring the monitor pods in OpenShift Data Foundation | Chapter 12. Restoring the monitor pods in OpenShift Data Foundation Restore the monitor pods if all three of them go down, and when OpenShift Data Foundation is not able to recover the monitor pods automatically. Note This is a disaster recovery procedure and must be performed under the guidance of the Red Hat support team. Contact Red Hat support team on, Red Hat support . Procedure Scale down the rook-ceph-operator and ocs operator deployments. Create a backup of all deployments in openshift-storage namespace. Patch the Object Storage Device (OSD) deployments to remove the livenessProbe parameter, and run it with the command parameter as sleep . Retrieve the monstore cluster map from all the OSDs. Create the recover_mon.sh script. Run the recover_mon.sh script. Patch the MON deployments, and run it with the command parameter as sleep . Edit the MON deployments. Patch the MON deployments to increase the initialDelaySeconds . Copy the previously retrieved monstore to the mon-a pod. Navigate into the MON pod and change the ownership of the retrieved monstore . Copy the keyring template file before rebuilding the mon db . Identify the keyring of all other Ceph daemons (MGR, MDS, RGW, Crash, CSI and CSI provisioners) from its respective secrets. Example keyring file, /etc/ceph/ceph.client.admin.keyring : Important For client.csi related keyring, refer to the keyring file output and add the default caps after fetching the key from its respective OpenShift Data Foundation secret. OSD keyring is added automatically post recovery. Navigate into the mon-a pod, and verify that the monstore has a monmap . Navigate into the mon-a pod. Verify that the monstore has a monmap . Optional: If the monmap is missing then create a new monmap . <mon-a-id> Is the ID of the mon-a pod. <mon-a-ip> Is the IP address of the mon-a pod. <mon-b-id> Is the ID of the mon-b pod. <mon-b-ip> Is the IP address of the mon-b pod. <mon-c-id> Is the ID of the mon-c pod. <mon-c-ip> Is the IP address of the mon-c pod. <fsid> Is the file system ID. Verify the monmap . Import the monmap . Important Use the previously created keyring file. Create a backup of the old store.db file. Copy the rebuild store.db file to the monstore directory. After rebuilding the monstore directory, copy the store.db file from local to the rest of the MON pods. <id> Is the ID of the MON pod Navigate into the rest of the MON pods and change the ownership of the copied monstore . <id> Is the ID of the MON pod Revert the patched changes. For MON deployments: <mon-deployment.yaml> Is the MON deployment yaml file For OSD deployments: <osd-deployment.yaml> Is the OSD deployment yaml file For MGR deployments: <mgr-deployment.yaml> Is the MGR deployment yaml file Important Ensure that the MON, MGR and OSD pods are up and running. Scale up the rook-ceph-operator and ocs-operator deployments. Verification steps Check the Ceph status to confirm that CephFS is running. Example output: Check the Multicloud Object Gateway (MCG) status. It should be active, and the backingstore and bucketclass should be in Ready state. Important If the MCG is not in the active state, and the backingstore and bucketclass not in the Ready state, you need to restart all the MCG related pods. For more information, see Section 12.1, "Restoring the Multicloud Object Gateway" . 12.1. 
Restoring the Multicloud Object Gateway If the Multicloud Object Gateway (MCG) is not in the active state, and the backingstore and bucketclass is not in the Ready state, you need to restart all the MCG related pods, and check the MCG status to confirm that the MCG is back up and running. Procedure Restart all the pods related to the MCG. <noobaa-operator> Is the name of the MCG operator <noobaa-core> Is the name of the MCG core pod <noobaa-endpoint> Is the name of the MCG endpoint <noobaa-db> Is the name of the MCG db pod If the RADOS Object Gateway (RGW) is configured, restart the pod. <rgw-pod> Is the name of the RGW pod Note In OpenShift Container Platform 4.11, after the recovery, RBD PVC fails to get mounted on the application pods. Hence, you need to restart the node that is hosting the application pods. To get the node name that is hosting the application pod, run the following command: | [
"oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage",
"oc scale deployment ocs-operator --replicas=0 -n openshift-storage",
"mkdir backup",
"cd backup",
"oc project openshift-storage",
"for d in USD(oc get deployment|awk -F' ' '{print USD1}'|grep -v NAME); do echo USDd;oc get deployment USDd -o yaml > oc_get_deployment.USD{d}.yaml; done",
"for i in USD(oc get deployment -l app=rook-ceph-osd -oname);do oc patch USD{i} -n openshift-storage --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' ; oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"osd\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}' ; done",
"#!/bin/bash ms=/tmp/monstore rm -rf USDms mkdir USDms for osd_pod in USD(oc get po -l app=rook-ceph-osd -oname -n openshift-storage); do echo \"Starting with pod: USDosd_pod\" podname=USD(echo USDosd_pod|sed 's/pod\\///g') oc exec USDosd_pod -- rm -rf USDms oc cp USDms USDpodname:USDms rm -rf USDms mkdir USDms echo \"pod in loop: USDosd_pod ; done deleting local dirs\" oc exec USDosd_pod -- ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-USD(oc get USDosd_pod -ojsonpath='{ .metadata.labels.ceph_daemon_id }') --op update-mon-db --no-mon-config --mon-store-path USDms echo \"Done with COT on pod: USDosd_pod\" oc cp USDpodname:USDms USDms echo \"Finished pulling COT data from pod: USDosd_pod\" done",
"chmod +x recover_mon.sh",
"./recover_mon.sh",
"for i in USD(oc get deployment -l app=rook-ceph-mon -oname);do oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'; done",
"oc get deployment rook-ceph-mon-a -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g\" | oc replace -f -",
"oc get deployment rook-ceph-mon-b -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g\" | oc replace -f -",
"oc get deployment rook-ceph-mon-c -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g\" | oc replace -f -",
"oc cp /tmp/monstore/ USD(oc get po -l app=rook-ceph-mon,mon=a -oname |sed 's/pod\\///g'):/tmp/",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"chown -R ceph:ceph /tmp/monstore",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"cp /etc/ceph/keyring-store/keyring /tmp/keyring",
"cat /tmp/keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"",
"oc get secret rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-keyring -ojson | jq .data.keyring | xargs echo | base64 -d [mds.ocs-storagecluster-cephfilesystem-a] key = AQB3r8VgAtr6OhAAVhhXpNKqRTuEVdRoxG4uRA== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\"",
"[mon.] key = AQDxTF1hNgLTNxAAi51cCojs01b4I5E6v2H8Uw== caps mon = \"allow \" [client.admin] key = AQDxTF1hpzguOxAA0sS8nN4udoO35OEbt3bqMQ== caps mds = \"allow \" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-a] key = AQCKTV1horgjARAA8aF/BDh/4+eG4RCNBCl+aw== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-b] key = AQCKTV1hN4gKLBAA5emIVq3ncV7AMEM1c1RmGA== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [client.rgw.ocs.storagecluster.cephobjectstore.a] key = AQCOkdBixmpiAxAA4X7zjn6SGTI9c1MBflszYA== caps mon = \"allow rw\" caps osd = \"allow rwx\" [mgr.a] key = AQBOTV1hGYOEORAA87471+eIZLZtptfkcHvTRg== caps mds = \"allow *\" caps mon = \"allow profile mgr\" caps osd = \"allow *\" [client.crash] key = AQBOTV1htO1aGRAAe2MPYcGdiAT+Oo4CNPSF1g== caps mgr = \"allow rw\" caps mon = \"allow profile crash\" [client.csi-cephfs-node] key = AQBOTV1hiAtuBBAAaPPBVgh1AqZJlDeHWdoFLw== caps mds = \"allow rw\" caps mgr = \"allow rw\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs *=\" [client.csi-cephfs-provisioner] key = AQBNTV1hHu6wMBAAzNXZv36aZJuE1iz7S7GfeQ== caps mgr = \"allow rw\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs metadata= \" [client.csi-rbd-node] key = AQBNTV1h+LnkIRAAWnpIN9bUAmSHOvJ0EJXHRw== caps mgr = \"allow rw\" caps mon = \"profile rbd\" caps osd = \"profile rbd\" [client.csi-rbd-provisioner] key = AQBNTV1hMNcsExAAvA3gHB2qaY33LOdWCvHG/A== caps mgr = \"allow rw\" caps mon = \"profile rbd\" caps osd = \"profile rbd\"",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap",
"monmaptool /tmp/monmap --print",
"monmaptool --create --add <mon-a-id> <mon-a-ip> --add <mon-b-id> <mon-b-ip> --add <mon-c-id> <mon-c-ip> --enable-all-features --clobber /root/monmap --fsid <fsid>",
"monmaptool /root/monmap --print",
"ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/keyring --monmap /root/monmap",
"chown -R ceph:ceph /tmp/monstore",
"mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.corrupted",
"mv /var/lib/ceph/mon/ceph-b/store.db /var/lib/ceph/mon/ceph-b/store.db.corrupted",
"mv /var/lib/ceph/mon/ceph-c/store.db /var/lib/ceph/mon/ceph-c/store.db.corrupted",
"mv /tmp/monstore/store.db /var/lib/ceph/mon/ceph-a/store.db",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph-a/store.db",
"oc cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph-a/store.db /tmp/store.db",
"oc cp /tmp/store.db USD(oc get po -l app=rook-ceph-mon,mon=<id> -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph- <id>",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon= <id> -oname)",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph- <id> /store.db",
"oc replace --force -f <mon-deployment.yaml>",
"oc replace --force -f <osd-deployment.yaml>",
"oc replace --force -f <mgr-deployment.yaml>",
"oc -n openshift-storage scale deployment ocs-operator --replicas=1",
"ceph -s",
"cluster: id: f111402f-84d1-4e06-9fdb-c27607676e55 health: HEALTH_ERR 1 filesystem is offline 1 filesystem is online with fewer MDS than max_mds 3 daemons have recently crashed services: mon: 3 daemons, quorum b,c,a (age 15m) mgr: a(active, since 14m) mds: ocs-storagecluster-cephfilesystem:0 osd: 3 osds: 3 up (since 15m), 3 in (since 2h) data: pools: 3 pools, 96 pgs objects: 500 objects, 1.1 GiB usage: 5.5 GiB used, 295 GiB / 300 GiB avail pgs: 96 active+clean",
"noobaa status -n openshift-storage",
"oc delete pods <noobaa-operator> -n openshift-storage",
"oc delete pods <noobaa-core> -n openshift-storage",
"oc delete pods <noobaa-endpoint> -n openshift-storage",
"oc delete pods <noobaa-db> -n openshift-storage",
"oc delete pods <rgw-pod> -n openshift-storage",
"oc get pods <application-pod> -n <namespace> -o yaml | grep nodeName nodeName: node_name"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/restoring-the-monitor-pods-in-openshift-data-foundation_rhodf |
Transitioning to Containerized Services | Transitioning to Containerized Services Red Hat OpenStack Platform 17.0 A basic guide to working with OpenStack Platform containerized services OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/transitioning_to_containerized_services/index |
Chapter 1. Kubernetes overview | Chapter 1. Kubernetes overview Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds. Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources. Figure 1.1. Evolution of container technologies for classical deployments To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor. You can perform the following actions by using Kubernetes: Sharing resources Orchestrating containers across multiple hosts Installing new hardware configurations Running health checks and self-healing applications Scaling containerized applications 1.1. Kubernetes components Table 1.1. Kubernetes components Component Purpose kube-proxy Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. kube-controller-manager Governs the state of the cluster. kube-scheduler Allocates pods to nodes. etcd Stores cluster data. kube-apiserver Validates and configures data for the API objects. kubelet Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. kubectl Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver . Node Node is a physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster. container runtime container runtime runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. container-registry Stores and accesses the container images. Pod The pod is the smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node. 1.2. Kubernetes resources A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. 
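As a brief hands-on illustration of how custom resources extend the API (a sketch only; the widgets.example.com resource name is hypothetical and simply stands in for whatever custom resources an Operator installs), the same tooling that queries built-in objects also queries custom ones once their definitions are registered: kubectl get crds lists the installed CustomResourceDefinitions, kubectl api-resources shows the custom kinds alongside built-in kinds such as Pod and Service, and kubectl get widgets.example.com -A lists instances of the hypothetical custom resource exactly as it would list any built-in object.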
Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources. By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state. Figure 1.2. Kubernetes cluster overview Table 1.2. Kubernetes Resources Resource Purpose Service Kubernetes uses services to expose a running application on a set of pods. ReplicaSets Kubernetes uses the ReplicaSets to maintain the constant pod number. Deployment A resource object that maintains the life cycle of an application. Kubernetes is a core component of an OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, the OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using the OpenShift Container Platform. Figure 1.3. Architecture of Kubernetes A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You can run the Kubernetes application by using worker nodes. You can use the Kubernetes namespace to differentiate cluster resources in a cluster. Namespace scoping is applicable for resource objects, such as deployment, service, and pods. You cannot use namespace for cluster-wide resource objects such as storage class, nodes, and persistent volumes. 1.3. Kubernetes conceptual guidelines Before getting started with the OpenShift Container Platform, consider these conceptual guidelines of Kubernetes: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. By using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. The API to OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes between a container running on any other Kubernetes and running on OpenShift Container Platform. No changes to the application. OpenShift Container Platform brings added-value features to provide enterprise-ready enhancements to Kubernetes. OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl . 
While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command-line lacks many features that could make it more user-friendly. OpenShift Container Platform offers a set of features and command-line tool like oc . Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and logs management to your containerization platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/getting_started/kubernetes-overview |
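The resources summarized in Table 1.2 are usually written as declarative manifests and applied with kubectl or oc . The following sketch is illustrative only: the names, namespace, image reference, and ports are assumptions, not values taken from this overview.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                  # hypothetical application name
  namespace: demo                  # hypothetical namespace
spec:
  replicas: 3                      # the generated ReplicaSet keeps three pods running
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/hello-app:1.0   # placeholder image reference
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  namespace: demo
spec:
  selector:
    app: hello-app                 # routes traffic to the pods created by the Deployment
  ports:
  - port: 80
    targetPort: 8080
Applying the manifest, for example with kubectl apply -f hello-app.yaml (or oc apply -f hello-app.yaml on OpenShift Container Platform), declares the desired state; the control plane then reconciles the cluster toward that state, as described in section 1.2.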
Chapter 18. Configuring NTP Using the chrony Suite | Chapter 18. Configuring NTP Using the chrony Suite Accurate time keeping is important for a number of reasons in IT. In networking for example, accurate time stamps in packets and logs are required. In Linux systems, the NTP protocol is implemented by a daemon running in user space. The user space daemon updates the system clock running in the kernel. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter ( TSC ) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interruptions. There is a choice between the daemons ntpd and chronyd , available from the repositories in the ntp and chrony packages respectively. This chapter describes the use of the chrony suite. 18.1. Introduction to the chrony Suite Chrony is an implementation of the Network Time Protocol (NTP). You can use Chrony : to synchronize the system clock with NTP servers, to synchronize the system clock with a reference clock, for example a GPS receiver, to synchronize the system clock with a manual time input, as an NTPv4(RFC 5905) server or peer to provide a time service to other computers in the network. Chrony performs well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine. Typical accuracy between two machines synchronized over the Internet is within a few milliseconds, and for machines on a LAN within tens of microseconds. Hardware timestamping or a hardware reference clock may improve accuracy between two machines synchronized to a sub-microsecond level. Chrony consists of chronyd , a daemon that runs in user space, and chronyc , a command line program which can be used to monitor the performance of chronyd and to change various operating parameters when it is running. 18.1.1. Differences Between ntpd and chronyd Things chronyd can do better than ntpd : chronyd can work well in an environment where access to the time reference is intermittent, whereas ntpd needs regular polling of time reference to work well. chronyd can perform well even when the network is congested for longer periods of time. chronyd can usually synchronize the clock faster and with better accuracy. chronyd quickly adapts to sudden changes in the rate of the clock, for example, due to changes in the temperature of the crystal oscillator, whereas ntpd may need a long time to settle down again. In the default configuration, chronyd never steps the time after the clock has been synchronized at system start, in order not to upset other running programs. ntpd can be configured to never step the time too, but it has to use a different means of adjusting the clock, which has some disadvantages including negative effect on accuracy of the clock. chronyd can adjust the rate of the clock on a Linux system in a larger range, which allows it to operate even on machines with a broken or unstable clock. For example, on some virtual machines. chronyd is smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving. Things chronyd can do that ntpd cannot do: chronyd provides support for isolated networks where the only method of time correction is manual entry. For example, by the administrator looking at a clock. 
chronyd can examine the errors corrected at different updates to estimate the rate at which the computer gains or loses time, and use this estimate to adjust the computer clock subsequently. chronyd provides support to work out the rate of gain or loss of the real-time clock, for example the clock that maintains the time when the computer is turned off. It can use this data when the system boots to set the system time using an adapted value of time taken from the real-time clock. These real-time clock facilities are currently only available on Linux systems. chronyd supports hardware timestamping on Linux, which allows extremely accurate synchronization on local networks. Things ntpd can do that chronyd cannot do: ntpd supports all operating modes from NTP version 4 ( RFC 5905 ), including broadcast, multicast and manycast clients and servers. Note that the broadcast and multicast modes are, even with authentication, inherently less accurate and less secure than the ordinary server and client mode, and should generally be avoided. ntpd supports the Autokey protocol ( RFC 5906 ) to authenticate servers with public-key cryptography. Note that the protocol has proven to be insecure and will be probably replaced with an implementation of the Network Time Security (NTS) specification. ntpd includes drivers for many reference clocks, whereas chronyd relies on other programs, for example gpsd , to access the data from the reference clocks using shared memory (SHM) or Unix domain socket (SOCK). 18.1.2. Choosing Between NTP Daemons Chrony should be preferred for all systems except for the systems that are managed or monitored by tools that do not support chrony, or the systems that have a hardware reference clock which cannot be used with chrony. Note Systems which are required to perform authentication of packets with the Autokey protocol, can only be used with ntpd , because chronyd does not support this protocol. The Autokey protocol has serious security issues, and thus using this protocol should be avoided. Instead of Autokey , use authentication with symmetric keys, which is supported by both chronyd and ntpd . Chrony supports stronger hash functions like SHA256 and SHA512, while ntpd can use only MD5 and SHA1. 18.2. Understanding chrony and Its Configuration 18.2.1. Understanding chronyd and chronyc The chrony daemon, chronyd , can be monitored and controlled by the command line utility chronyc . This utility provides a command prompt, which allows entering a number of commands to query the current state of chronyd and make changes to its configuration. By default, chronyd accepts only commands from a local instance of chronyc , but it can be configured to accept monitoring commands also from remote hosts. The remote access should be restricted. 18.2.2. Understanding the chrony Configuration Commands The default configuration file for chronyd is /etc/chrony.conf . The -f option can be used to specify an alternate configuration file path. See the chronyd man page for further options. Below is a selection of chronyd configuration options: Comments Comments should be preceded by #, %, ; or ! allow Optionally specify a host, subnet, or network from which to allow NTP connections to a machine acting as NTP server. The default is not to allow connections. Example 18.1. 
Granting access with the allow option: Use this command to grant access to an IPv4 network: allow 192.0.2.0/24 Use this command to grant access to an IPv6 address: allow 2001:0db8:85a3::8a2e:0370:7334 Note The UDP port number 123 needs to be open in the firewall in order to allow the client access: If you want to open port 123 permanently, use the --permanent option: cmdallow This is similar to the allow directive (see section allow ), except that it allows control access (rather than NTP client access) to a particular subnet or host. (By "control access" is meant that chronyc can be run on those hosts and successfully connect to chronyd on this computer.) The syntax is identical. There is also a cmddeny all directive with similar behavior to the cmdallow all directive. dumpdir Path to the directory to save the measurement history across restarts of chronyd (assuming no changes are made to the system clock behavior whilst it is not running). If this capability is to be used (via the dumponexit command in the configuration file, or the dump command in chronyc ), the dumpdir command should be used to define the directory where the measurement histories are saved. dumponexit If this command is present, it indicates that chronyd should save the measurement history for each of its time sources whenever the program exits. (See the dumpdir command above). hwtimestamp The hwtimestamp directive enables hardware timestamping for extremely accurate synchronization. For more details, see the chrony.conf(5) man page. local The local keyword is used to allow chronyd to appear synchronized to real time from the viewpoint of clients polling it, even if it has no current synchronization source. This option is normally used on the "master" computer in an isolated network, where several computers are required to synchronize to one another, and the "master" is kept in line with real time by manual input. An example of the command is: A large value of 10 indicates that the clock is so many hops away from a reference clock that its time is unreliable. If the computer ever has access to another computer which is ultimately synchronized to a reference clock, it will almost certainly be at a stratum less than 10. Therefore, the choice of a high value like 10 for the local command prevents the machine's own time from ever being confused with real time, were it ever to leak out to clients that have visibility of real servers. log The log command indicates that certain information is to be logged. It accepts the following options: measurements This option logs the raw NTP measurements and related information to a file called measurements.log . statistics This option logs information about the regression processing to a file called statistics.log . tracking This option logs changes to the estimate of the system's gain or loss rate, and any slews made, to a file called tracking.log . rtc This option logs information about the system's real-time clock. refclocks This option logs the raw and filtered reference clock measurements to a file called refclocks.log . tempcomp This option logs the temperature measurements and system rate compensations to a file called tempcomp.log . The log files are written to the directory specified by the logdir command. An example of the command is: logdir This directive allows the directory where log files are written to be specified.
An example of the use of this directive is: makestep Normally chronyd will cause the system to gradually correct any time offset, by slowing down or speeding up the clock as required. In certain situations, the system clock may be so far adrift that this slewing process would take a very long time to correct the system clock. This directive forces chronyd to step system clock if the adjustment is larger than a threshold value, but only if there were no more clock updates since chronyd was started than a specified limit (a negative value can be used to disable the limit). This is particularly useful when using reference clock, because the initstepslew directive only works with NTP sources. An example of the use of this directive is: This would step the system clock if the adjustment is larger than 1000 seconds, but only in the first ten clock updates. maxchange This directive sets the maximum allowed offset corrected on a clock update. The check is performed only after the specified number of updates to allow a large initial adjustment of the system clock. When an offset larger than the specified maximum occurs, it will be ignored for the specified number of times and then chronyd will give up and exit (a negative value can be used to never exit). In both cases a message is sent to syslog. An example of the use of this directive is: After the first clock update, chronyd will check the offset on every clock update, it will ignore two adjustments larger than 1000 seconds and exit on another one. maxupdateskew One of chronyd 's tasks is to work out how fast or slow the computer's clock runs relative to its reference sources. In addition, it computes an estimate of the error bounds around the estimated value. If the range of error is too large, it indicates that the measurements have not settled down yet, and that the estimated gain or loss rate is not very reliable. The maxupdateskew parameter is the threshold for determining whether an estimate is too unreliable to be used. By default, the threshold is 1000 ppm. The format of the syntax is: Typical values for skew-in-ppm might be 100 for a dial-up connection to servers over a telephone line, and 5 or 10 for a computer on a LAN. It should be noted that this is not the only means of protection against using unreliable estimates. At all times, chronyd keeps track of both the estimated gain or loss rate, and the error bound on the estimate. When a new estimate is generated following another measurement from one of the sources, a weighted combination algorithm is used to update the master estimate. So if chronyd has an existing highly-reliable master estimate and a new estimate is generated which has large error bounds, the existing master estimate will dominate in the new master estimate. minsources The minsources directive sets the minimum number of sources that need to be considered as selectable in the source selection algorithm before the local clock is updated. The format of the syntax is: By default, number-of-sources is 1. Setting minsources to a larger number can be used to improve the reliability, because multiple sources will need to correspond with each other. noclientlog This directive, which takes no arguments, specifies that client accesses are not to be logged. Normally they are logged, allowing statistics to be reported using the clients command in chronyc . reselectdist When chronyd selects synchronization source from available sources, it will prefer the one with minimum synchronization distance. 
However, to avoid frequent reselecting when there are sources with similar distance, a fixed distance is added to the distance for sources that are currently not selected. This can be set with the reselectdist option. By default, the distance is 100 microseconds. The format of the syntax is: stratumweight The stratumweight directive sets how much distance should be added per stratum to the synchronization distance when chronyd selects the synchronization source from available sources. The format of the syntax is: By default, dist-in-seconds is 1 millisecond. This means that sources with lower stratum are usually preferred to sources with higher stratum even when their distance is significantly worse. Setting stratumweight to 0 makes chronyd ignore stratum when selecting the source. rtcfile The rtcfile directive defines the name of the file in which chronyd can save parameters associated with tracking the accuracy of the system's real-time clock (RTC). The format of the syntax is: chronyd saves information in this file when it exits and when the writertc command is issued in chronyc . The information saved is the RTC's error at some epoch, that epoch (in seconds since January 1 1970), and the rate at which the RTC gains or loses time. Not all real-time clocks are supported as their code is system-specific. Note that if this directive is used then the real-time clock should not be manually adjusted as this would interfere with chrony 's need to measure the rate at which the real-time clock drifts if it was adjusted at random intervals. rtcsync The rtcsync directive is present in the /etc/chrony.conf file by default. This will inform the kernel the system clock is kept synchronized and the kernel will update the real-time clock every 11 minutes. 18.2.3. Security with chronyc Chronyc can access chronyd in two ways: Internet Protocol (IPv4 or IPv6), Unix domain socket, which is accessible locally by the root or chrony user. By default, chronyc connects to the Unix domain socket. The default path is /var/run/chrony/chronyd.sock . If this connection fails, which can happen for example when chronyc is running under a non-root user, chronyc tries to connect to 127.0.0.1 and then ::1. Only the following monitoring commands, which do not affect the behavior of chronyd , are allowed from the network: activity manual list rtcdata smoothing sources sourcestats tracking waitsync The set of hosts from which chronyd accepts these commands can be configured with the cmdallow directive in the configuration file of chronyd , or the cmdallow command in chronyc . By default, the commands are accepted only from localhost (127.0.0.1 or ::1). All other commands are allowed only through the Unix domain socket. When sent over the network, chronyd responds with a Not authorised error, even if it is from localhost. Accessing chronyd remotely with chronyc Allow access from both IPv4 and IPv6 addresses by adding the following to the /etc/chrony.conf file: or Allow commands from the remote IP address, network, or subnet by using the cmdallow directive. Add the following content to the /etc/chrony.conf file: Open port 323 in the firewall to connect from a remote system. If you want to open port 323 permanently, use the --permanent . Note that the allow directive is for NTP access whereas the cmdallow directive is to enable receiving of remote commands. It is possible to make these changes temporarily using chronyc running locally. Edit the configuration file to make permanent changes. 18.3. Using chrony 18.3.1. 
Installing chrony The chrony suite is installed by default on some versions of Red Hat Enterprise Linux 7. If required, to ensure that it is, run the following command as root : The default location for the chrony daemon is /usr/sbin/chronyd . The command line utility will be installed to /usr/bin/chronyc . 18.3.2. Checking the Status of chronyd To check the status of chronyd , issue the following command: 18.3.3. Starting chronyd To start chronyd , issue the following command as root : To ensure chronyd starts automatically at system start, issue the following command as root : 18.3.4. Stopping chronyd To stop chronyd , issue the following command as root : To prevent chronyd from starting automatically at system start, issue the following command as root : 18.3.5. Checking if chrony is Synchronized To check if chrony is synchronized, make use of the tracking , sources , and sourcestats commands. 18.3.5.1. Checking chrony Tracking To check chrony tracking, issue the following command: The fields are as follows: Reference ID This is the reference ID and name (or IP address) if available, of the server to which the computer is currently synchronized. Reference ID is a hexadecimal number to avoid confusion with IPv4 addresses. Stratum The stratum indicates how many hops away from a computer with an attached reference clock we are. Such a computer is a stratum-1 computer, so the computer in the example is two hops away (that is to say, a.b.c is a stratum-2 and is synchronized from a stratum-1). Ref time This is the time (UTC) at which the last measurement from the reference source was processed. System time In normal operation, chronyd never steps the system clock, because any jump in the timescale can have adverse consequences for certain application programs. Instead, any error in the system clock is corrected by slightly speeding up or slowing down the system clock until the error has been removed, and then returning to the system clock's normal speed. A consequence of this is that there will be a period when the system clock (as read by other programs using the gettimeofday() system call, or by the date command in the shell) will be different from chronyd 's estimate of the current true time (which it reports to NTP clients when it is operating in server mode). The value reported on this line is the difference due to this effect. Last offset This is the estimated local offset on the last clock update. RMS offset This is a long-term average of the offset value. Frequency The "frequency" is the rate by which the system's clock would be wrong if chronyd was not correcting it. It is expressed in ppm (parts per million). For example, a value of 1 ppm would mean that when the system's clock thinks it has advanced 1 second, it has actually advanced by 1.000001 seconds relative to true time. Residual freq This shows the "residual frequency" for the currently selected reference source. This reflects any difference between what the measurements from the reference source indicate the frequency should be and the frequency currently being used. The reason this is not always zero is that a smoothing procedure is applied to the frequency. Each time a measurement from the reference source is obtained and a new residual frequency computed, the estimated accuracy of this residual is compared with the estimated accuracy (see skew ) of the existing frequency value. A weighted average is computed for the new frequency, with weights depending on these accuracies. 
If the measurements from the reference source follow a consistent trend, the residual will be driven to zero over time. Skew This is the estimated error bound on the frequency. Root delay This is the total of the network path delays to the stratum-1 computer from which the computer is ultimately synchronized. Root delay values are printed in nanosecond resolution. In certain extreme situations, this value can be negative. (This can arise in a symmetric peer arrangement where the computers' frequencies are not tracking each other and the network delay is very short relative to the turn-around time at each computer.) Root dispersion This is the total dispersion accumulated through all the computers back to the stratum-1 computer from which the computer is ultimately synchronized. Dispersion is due to system clock resolution, statistical measurement variations etc. Root dispersion values are printed in nanosecond resolution. Leap status This is the leap status, which can be Normal, Insert second, Delete second or Not synchronized. 18.3.5.2. Checking chrony Sources The sources command displays information about the current time sources that chronyd is accessing. The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the columns. The columns are as follows: M This indicates the mode of the source. ^ means a server, = means a peer and # indicates a locally connected reference clock. S This column indicates the state of the sources. "*" indicates the source to which chronyd is currently synchronized. "+" indicates acceptable sources which are combined with the selected source. "-" indicates acceptable sources which are excluded by the combining algorithm. "?" indicates sources to which connectivity has been lost or whose packets do not pass all tests. "x" indicates a clock which chronyd thinks is a falseticker (its time is inconsistent with a majority of other sources). "~" indicates a source whose time appears to have too much variability. The "?" condition is also shown at start-up, until at least 3 samples have been gathered from it. Name/IP address This shows the name or the IP address of the source, or reference ID for reference clock. Stratum This shows the stratum of the source, as reported in its most recently received sample. Stratum 1 indicates a computer with a locally attached reference clock. A computer that is synchronized to a stratum 1 computer is at stratum 2. A computer that is synchronized to a stratum 2 computer is at stratum 3, and so on. Poll This shows the rate at which the source is being polled, as a base-2 logarithm of the interval in seconds. Thus, a value of 6 would indicate that a measurement is being made every 64 seconds. chronyd automatically varies the polling rate in response to prevailing conditions. Reach This shows the source's reach register printed as an octal number. The register has 8 bits and is updated on every received or missed packet from the source. A value of 377 indicates that a valid reply was received for all of the last eight transmissions. LastRx This column shows how long ago the last sample was received from the source. This is normally in seconds. The letters m , h , d or y indicate minutes, hours, days or years. A value of 10 years indicates there were no samples received from this source yet. Last sample This column shows the offset between the local clock and the source at the last measurement. 
The number in the square brackets shows the actual measured offset. This may be suffixed by ns (indicating nanoseconds), us (indicating microseconds), ms (indicating milliseconds), or s (indicating seconds). The number to the left of the square brackets shows the original measurement, adjusted to allow for any slews applied to the local clock since. The number following the +/- indicator shows the margin of error in the measurement. Positive offsets indicate that the local clock is ahead of the source. 18.3.5.3. Checking chrony Source Statistics The sourcestats command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd . The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the columns. The columns are as follows: Name/IP address This is the name or IP address of the NTP server (or peer) or reference ID of the reference clock to which the rest of the line relates. NP This is the number of sample points currently being retained for the server. The drift rate and current offset are estimated by performing a linear regression through these points. NR This is the number of runs of residuals having the same sign following the last regression. If this number starts to become too small relative to the number of samples, it indicates that a straight line is no longer a good fit to the data. If the number of runs is too low, chronyd discards older samples and re-runs the regression until the number of runs becomes acceptable. Span This is the interval between the oldest and newest samples. If no unit is shown the value is in seconds. In the example, the interval is 46 minutes. Frequency This is the estimated residual frequency for the server, in parts per million. In this case, the computer's clock is estimated to be running 1 part in 10 9 slow relative to the server. Freq Skew This is the estimated error bounds on Freq (again in parts per million). Offset This is the estimated offset of the source. Std Dev This is the estimated sample standard deviation. 18.3.6. Manually Adjusting the System Clock To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root : If the rtcfile directive is used, the real-time clock should not be manually adjusted. Random adjustments would interfere with chrony 's need to measure the rate at which the real-time clock drifts. 18.4. Setting Up chrony for Different Environments 18.4.1. Setting Up chrony for a System in an Isolated Network For a network that is never connected to the Internet, one computer is selected to be the master timeserver. The other computers are either direct clients of the master, or clients of clients. On the master, the drift file must be manually set with the average rate of drift of the system clock. If the master is rebooted, it will obtain the time from surrounding systems and calculate an average to set its system clock. Thereafter it resumes applying adjustments based on the drift file. The drift file will be updated automatically when the settime command is used. On the system selected to be the master, using a text editor running as root , edit the /etc/chrony.conf as follows: Where 192.0.2.0 is the network or subnet address from which the clients are allowed to connect. 
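For reference, the master configuration described above comes down to a handful of directives. The following sketch reuses the example host names client1, client3, and client6 from this chapter; adjust the key number, file paths, and network to your environment:
driftfile /var/lib/chrony/drift           # record the measured clock drift across restarts
commandkey 1                              # command key ID matching an entry in the key file
keyfile /etc/chrony.keys
initstepslew 10 client1 client3 client6   # step the clock at start using the listed hosts
local stratum 8                           # serve time even without an upstream source
manual                                    # allow manual time input through chronyc settime
allow 192.0.2.0                           # accept NTP clients from this network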
On the systems selected to be direct clients of the master, using a text editor running as root , edit the /etc/chrony.conf as follows: Where 192.0.2.123 is the address of the master, and master is the host name of the master. Clients with this configuration will resynchronize the master if it restarts. On the client systems which are not to be direct clients of the master, the /etc/chrony.conf file should be the same except that the local and allow directives should be omitted. In an Isolated Network, you can also use the local directive that enables a local reference mode, which allows chronyd operating as an NTP server to appear synchronized to real time, even when it was never synchronized or the last update of the clock happened a long time ago. To allow multiple servers in the network to use the same local configuration and to be synchronized to one another, without confusing clients that poll more than one server, use the orphan option of the local directive which enables the orphan mode. Each server needs to be configured to poll all other servers with local . This ensures that only the server with the smallest reference ID has the local reference active and other servers are synchronized to it. When the server fails, another one will take over. 18.5. Using chronyc 18.5.1. Using chronyc to Control chronyd To make changes to the local instance of chronyd using the command line utility chronyc in interactive mode, enter the following command as root : chronyc must run as root if some of the restricted commands are to be used. The chronyc command prompt will be displayed as follows: You can type help to list all of the commands. The utility can also be invoked in non-interactive command mode if called together with a command as follows: Note Changes made using chronyc are not permanent, they will be lost after a chronyd restart. For permanent changes, modify /etc/chrony.conf . 18.6. Chrony with HW timestamping 18.6.1. Understanding Hardware Timestamping Hardware timestamping is a feature supported in some Network Interface Controller (NICs) which provides accurate timestamping of incoming and outgoing packets. NTP timestamps are usually created by the kernel and chronyd with the use of the system clock. However, when HW timestamping is enabled, the NIC uses its own clock to generate the timestamps when packets are entering or leaving the link layer or the physical layer. When used with NTP , hardware timestamping can significantly improve the accuracy of synchronization. For best accuracy, both NTP servers and NTP clients need to use hardware timestamping. Under ideal conditions, a sub-microsecond accuracy may be possible. Another protocol for time synchronization that uses hardware timestamping is PTP . For further information about PTP , see Chapter 20, Configuring PTP Using ptp4l . Unlike NTP , PTP relies on assistance in network switches and routers. If you want to reach the best accuracy of synchronization, use PTP on networks that have switches and routers with PTP support, and prefer NTP on networks that do not have such switches and routers. 18.6.2. Verifying Support for Hardware Timestamping To verify that hardware timestamping with NTP is supported by an interface, use the ethtool -T command. An interface can be used for hardware timestamping with NTP if ethtool lists the SOF_TIMESTAMPING_TX_HARDWARE and SOF_TIMESTAMPING_TX_SOFTWARE capabilities and also the HWTSTAMP_FILTER_ALL filter mode. Example 18.2. 
Verifying Support for Hardware Timestamping on a Specific Interface Output: 18.6.3. Enabling Hardware Timestamping To enable hardware timestamping, use the hwtimestamp directive in the /etc/chrony.conf file. The directive can either specify a single interface, or a wildcard character ( * ) can be used to enable hardware timestamping on all interfaces that support it. Use the wildcard specification only if no other application, such as ptp4l from the linuxptp package, is using hardware timestamping on an interface. Multiple hwtimestamp directives are allowed in the chrony configuration file. Example 18.3. Enabling Hardware Timestamping by Using the hwtimestamp Directive 18.6.4. Configuring Client Polling Interval The default range of a polling interval (64-1024 seconds) is recommended for servers on the Internet. For local servers and hardware timestamping, a shorter polling interval needs to be configured in order to minimize the offset of the system clock. The following directive in /etc/chrony.conf specifies a local NTP server using a one-second polling interval: 18.6.5. Enabling Interleaved Mode NTP servers that are not hardware NTP appliances, but rather general-purpose computers running a software NTP implementation, like chrony , will get a hardware transmit timestamp only after sending a packet. This behavior prevents the server from saving the timestamp in the packet to which it corresponds. To enable NTP clients to receive transmit timestamps that were generated after the transmission, configure the clients to use the NTP interleaved mode by adding the xleave option to the server directive in /etc/chrony.conf : 18.6.6. Configuring Server for Large Number of Clients The default server configuration allows a few thousand clients at most to use the interleaved mode concurrently. To configure the server for a larger number of clients, increase the clientloglimit directive in /etc/chrony.conf . This directive specifies the maximum size of memory allocated for logging of clients' access on the server: 18.6.7. Verifying Hardware Timestamping To verify that the interface has successfully enabled hardware timestamping, check the system log. The log should contain a message from chronyd for each interface with successfully enabled hardware timestamping. Example 18.4. Log Messages for Interfaces with Enabled Hardware Timestamping When chronyd is configured as an NTP client or peer, you can have the transmit and receive timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata command: Example 18.5. Reporting the Transmit, Receive Timestamping and Interleaved Mode for Each NTP Source Output: Example 18.6. Reporting the Stability of NTP Measurements With hardware timestamping enabled, stability of NTP measurements should be in tens or hundreds of nanoseconds, under normal load. This stability is reported in the Std Dev column of the output of the chronyc sourcestats command: Output: 18.6.8. Configuring PTP-NTP bridge If a highly accurate Precision Time Protocol ( PTP ) grandmaster is available in a network that does not have switches or routers with PTP support, a computer may be dedicated to operate as a PTP slave and a stratum-1 NTP server. Such a computer needs to have two or more network interfaces, and be close to the grandmaster or have a direct connection to it. This will ensure highly accurate synchronization in the network.
Configure the ptp4l and phc2sys programs from the linuxptp packages to use one interface to synchronize the system clock using PTP . The configuration is described in the Chapter 20, Configuring PTP Using ptp4l . Configure chronyd to provide the system time using the other interface: Example 18.7. Configuring chronyd to Provide the System Time Using the Other Interface 18.7. Additional Resources The following sources of information provide additional resources regarding chrony . 18.7.1. Installed Documentation chronyc(1) man page - Describes the chronyc command-line interface tool including commands and command options. chronyd(8) man page - Describes the chronyd daemon including commands and command options. chrony.conf(5) man page - Describes the chrony configuration file. 18.7.2. Online Documentation http://chrony.tuxfamily.org/doc/3.1/chronyc.html http://chrony.tuxfamily.org/doc/3.1/chronyd.html http://chrony.tuxfamily.org/doc/3.1/chrony.conf.html For answers to FAQs, see http://chrony.tuxfamily.org/faq.html | [
"allow 192.0.2.0/24",
"allow 2001:0db8:85a3::8a2e:0370:7334",
"~]# firewall-cmd --zone=public --add-port=123/udp",
"~]# firewall-cmd --permanent --zone=public --add-port=123/udp",
"local stratum 10",
"log measurements statistics tracking",
"logdir /var/log/chrony",
"makestep 1000 10",
"maxchange 1000 1 2",
"maxupdateskew skew-in-ppm",
"minsources number-of-sources",
"reselectdist dist-in-seconds",
"stratumweight dist-in-seconds",
"rtcfile /var/lib/chrony/rtc",
"bindcmdaddress 0.0.0.0",
"bindcmdaddress :",
"cmdallow 192.168.1.0/24",
"~]# firewall-cmd --zone=public --add-port=323/udp",
"~]# firewall-cmd --permanent --zone=public --add-port=323/udp",
"~]# yum install chrony",
"~]USD systemctl status chronyd chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled) Active: active (running) since Wed 2013-06-12 22:23:16 CEST; 11h ago",
"~]# systemctl start chronyd",
"~]# systemctl enable chronyd",
"~]# systemctl stop chronyd",
"~]# systemctl disable chronyd",
"~]USD chronyc tracking Reference ID : CB00710F (foo.example.net) Stratum : 3 Ref time (UTC) : Fri Jan 27 09:49:17 2017 System time : 0.000006523 seconds slow of NTP time Last offset : -0.000006747 seconds RMS offset : 0.000035822 seconds Frequency : 3.225 ppm slow Residual freq : 0.000 ppm Skew : 0.129 ppm Root delay : 0.013639022 seconds Root dispersion : 0.001100737 seconds Update interval : 64.2 seconds Leap status : Normal",
"~]USD chronyc sources 210 Number of sources = 3 MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== #* GPS0 0 4 377 11 -479ns[ -621ns] +/- 134ns ^? a.b.c 2 6 377 23 -923us[ -924us] +/- 43ms ^+ d.e.f 1 6 377 21 -2629us[-2619us] +/- 86ms",
"~]USD chronyc sourcestats 210 Number of sources = 1 Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev =============================================================================== abc.def.ghi 11 5 46m -0.001 0.045 1us 25us",
"~]# chronyc makestep",
"driftfile /var/lib/chrony/drift commandkey 1 keyfile /etc/chrony.keys initstepslew 10 client1 client3 client6 local stratum 8 manual allow 192.0.2.0",
"server master driftfile /var/lib/chrony/drift logdir /var/log/chrony log measurements statistics tracking keyfile /etc/chrony.keys commandkey 24 local stratum 10 initstepslew 20 master allow 192.0.2.123",
"~]# chronyc",
"chronyc>",
"chronyc command",
"~]# ethtool -T eth0",
"Timestamping parameters for eth0: Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) software-system-clock (SOF_TIMESTAMPING_SOFTWARE) hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) PTP Hardware Clock: 0 Hardware Transmit Timestamp Modes: off (HWTSTAMP_TX_OFF) on (HWTSTAMP_TX_ON) Hardware Receive Filter Modes: none (HWTSTAMP_FILTER_NONE) all (HWTSTAMP_FILTER_ALL) ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC) ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) ptpv2-l4-sync (HWTSTAMP_FILTER_PTP_V2_L4_SYNC) ptpv2-l4-delay-req (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) ptpv2-l2-sync (HWTSTAMP_FILTER_PTP_V2_L2_SYNC) ptpv2-l2-delay-req (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT) ptpv2-sync (HWTSTAMP_FILTER_PTP_V2_SYNC) ptpv2-delay-req (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)",
"hwtimestamp eth0 hwtimestamp eth1 hwtimestamp *",
"server ntp.local minpoll 0 maxpoll 0",
"server ntp.local minpoll 0 maxpoll 0 xleave",
"clientloglimit 100000000",
"chronyd[4081]: Enabled HW timestamping on eth0 chronyd[4081]: Enabled HW timestamping on eth1",
"~]# chronyc ntpdata",
"Remote address : 203.0.113.15 (CB00710F) Remote port : 123 Local address : 203.0.113.74 (CB00714A) Leap status : Normal Version : 4 Mode : Server Stratum : 1 Poll interval : 0 (1 seconds) Precision : -24 (0.000000060 seconds) Root delay : 0.000015 seconds Root dispersion : 0.000015 seconds Reference ID : 47505300 (GPS) Reference time : Wed May 03 13:47:45 2017 Offset : -0.000000134 seconds Peer delay : 0.000005396 seconds Peer dispersion : 0.000002329 seconds Response time : 0.000152073 seconds Jitter asymmetry: +0.00 NTP tests : 111 111 1111 Interleaved : Yes Authenticated : No TX timestamping : Hardware RX timestamping : Hardware Total TX : 27 Total RX : 27 Total valid RX : 27",
"chronyc sourcestats",
"210 Number of sources = 1 Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev ntp.local 12 7 11 +0.000 0.019 +0ns 49ns",
"bindaddress 203.0.113.74 hwtimestamp eth1 local stratum 1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Configuring_NTP_Using_the_chrony_Suite |
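Taken together, the hardware timestamping directives from section 18.6 can be combined in a single client configuration. The following is a sketch only; ntp.local is the placeholder server name used in that section, and whether hwtimestamp * covers an interface depends on your NIC support:
hwtimestamp *                                  # enable hardware timestamping on all capable interfaces
server ntp.local minpoll 0 maxpoll 0 xleave    # one-second polling and NTP interleaved mode
driftfile /var/lib/chrony/drift
After restarting chronyd , the chronyc ntpdata and chronyc sourcestats commands shown in Examples 18.5 and 18.6 confirm whether hardware timestamps are in use and whether measurement stability has reached the nanosecond range.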
Chapter 9. Installing a cluster on AWS into a government region | Chapter 9. Installing a cluster on AWS into a government region In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 9.2. AWS government regions OpenShift Container Platform supports deploying a cluster to an AWS GovCloud (US) region. The following AWS GovCloud partitions are supported: us-gov-east-1 us-gov-west-1 9.3. Installation requirements Before you can install the cluster, you must: Provide an existing private AWS VPC and subnets to host the cluster. Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region. Manually create the installation configuration file ( install-config.yaml ). 9.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS GovCloud Region. Therefore, clusters must be private if they are deployed to an AWS GovCloud Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. 
The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 9.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 9.5. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 9.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. 
See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. 
These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 9.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 9.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 9.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 9.5.5. AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. 
However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 9.6. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm.
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.8. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace worker nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS region. When creating the installation configuration file, ensure that you select the same AWS region that you specified when configuring your subscription. 9.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster.
You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.10. Manually creating the installation configuration file Installing the cluster requires that you manually generate the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 9.10.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration.
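As an optional aside, if you want to check whether a candidate etcd volume can meet the 10 ms p99 fsync target mentioned above, a synchronous-write benchmark such as fio can give a rough indication. The following is a minimal sketch only, not part of the documented installation flow; the test directory, file size, and block size are illustrative assumptions rather than values from this document:

# write small blocks with an fdatasync after every write, similar to etcd's I/O pattern
USD fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=<test_directory_on_candidate_volume> \
    --size=22m --bs=2300 --name=etcd-fsync-check

In the fsync/fdatasync latency section of the fio output, the 99th percentile should be comfortably below 10 ms for the volume to be a reasonable etcd candidate.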
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.10.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 9.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 9.10.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 9.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 9.10.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. 
If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.10.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.10.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
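Before starting the procedure, you can optionally confirm that the security groups you plan to reference exist and belong to the VPC that you are deploying into. The following AWS CLI sketch assumes the CLI is configured for the target account; the group IDs shown are placeholders, not values from this document:

# list the VPC, ID, and name for each candidate security group
USD aws ec2 describe-security-groups \
    --group-ids sg-1 sg-2 \
    --query 'SecurityGroups[].{ID:GroupId,VPC:VpcId,Name:GroupName}' \
    --output table

Each VpcId in the output should match the VPC whose subnets you list in the install-config.yaml file.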
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 9.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.12.
Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Incorporating the Cloud Credential Operator utility manifests . 9.12.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 9.12.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 9.12.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 9.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 9.4. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
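Optionally, before extracting the binary, you can confirm which AWS principal your local credentials resolve to and which CPU architecture your workstation reports, because the extracted ccoctl binary must run on a matching architecture. This is an informal check, not a required step, and it assumes the AWS CLI is installed and configured:

# show the AWS account and principal associated with the current credentials
USD aws sts get-caller-identity
# show the local CPU architecture, which the ccoctl binary must match
USD uname -m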
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 9.12.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 9.12.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. 
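Optionally, you can inspect the extracted files before passing the directory to ccoctl. The following sketch reuses the placeholder from the previous command and assumes that the extracted files use a .yaml extension:

# list the extracted manifests
USD ls <path_to_directory_for_credentials_requests>
# each listed file should contain a CredentialsRequest object
USD grep -l "kind: CredentialsRequest" <path_to_directory_for_credentials_requests>/*.yaml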
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 9.12.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. 
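As noted above, the --dry-run flag writes JSON files to the local file system instead of calling AWS, and those files can later be applied with the AWS CLI. The following is a hedged sketch of that workflow, assuming the subcommand you are running supports --dry-run; the generated file names vary, so inspect the output directory and substitute the real path before applying anything:

USD ccoctl aws create-iam-roles \
    --name=<name> \
    --region=<aws_region> \
    --credentials-requests-dir=<path_to_credentials_requests_directory> \
    --identity-provider-arn=<identity_provider_arn> \
    --dry-run
# review the generated JSON, then apply it with the AWS CLI
USD aws iam create-role --cli-input-json file://<path_to_generated_role_file>.json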
Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
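At this point, you can optionally confirm that the identity provider and bucket created earlier in this procedure are visible in your account. These checks are a sketch; the bucket name assumes the <name>-oidc pattern shown in the earlier example output:

# the identity provider ARN printed earlier should appear in this list
USD aws iam list-open-id-connect-providers
# exits silently (status 0) if the OIDC bucket exists and is accessible
USD aws s3api head-bucket --bucket <name>-oidc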
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 9.12.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 9.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
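If you followed the manual or short-term credentials path above, it can be worth confirming that the generated manifests and signing key are in place before starting the deployment. The following sketch assumes that <installation_directory> is the directory into which you copied the ccoctl output:

# the credentials secrets and authentication configuration should be listed
USD ls <installation_directory>/manifests/ | grep -E 'credentials|authentication'
# the tls directory should contain bound-service-account-signing-key.key
USD ls <installation_directory>/tls/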
Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 9.15. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 9.17. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-aws-government-region |
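For orientation, the manual-credentials (ccoctl) commands above can be strung together into a single shell session. The sketch below is illustrative only: the directory names credrequests and ccoctl-output, and the placeholders <name>, <aws_region>, and <installation_directory>, are assumptions rather than values required by the installer.

```bash
# Minimal sketch of the manual-mode credential flow shown in the commands above (illustrative paths).
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a ~/.pull-secret)
oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl" -a ~/.pull-secret
chmod 775 ccoctl

# Extract the CredentialsRequest objects for this release and create the AWS resources for them.
oc adm release extract --from="$RELEASE_IMAGE" --credentials-requests --included \
  --install-config=<installation_directory>/install-config.yaml \
  --to=credrequests
./ccoctl aws create-all --name=<name> --region=<aws_region> \
  --credentials-requests-dir=credrequests --output-dir=ccoctl-output

# Generate installer manifests, copy in the ccoctl output, and run the installation.
openshift-install create manifests --dir <installation_directory>
cp ccoctl-output/manifests/* <installation_directory>/manifests/
cp -a ccoctl-output/tls <installation_directory>/
./openshift-install create cluster --dir <installation_directory> --log-level=info
```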
Chapter 18. Managing cluster resources | Chapter 18. Managing cluster resources There are a variety of commands you can use to display, modify, and administer cluster resources. 18.1. Displaying configured resources To display a list of all configured resources, use the following command. For example, if your system is configured with a resource named VirtualIP and a resource named WebSite , the pcs resource status command yields the following output. To display the configured parameters for a resource, use the following command. For example, the following command displays the currently configured parameters for resource VirtualIP . As of RHEL 8.5, to display the status of an individual resource, use the following command. For example, if your system is configured with a resource named VirtualIP the pcs resource status VirtualIP command yields the following output. As of RHEL 8.5, to display the status of the resources running on a specific node, use the following command. You can use this command to display the status of resources on both cluster and remote nodes. For example, if node-01 is running resources named VirtualIP and WebSite the pcs resource status node=node-01 command might yield the following output. 18.2. Exporting cluster resources as pcs commands As of Red Hat Enterprise Linux 8.7, you can display the pcs commands that can be used to re-create configured cluster resources on a different system using the --output-format=cmd option of the pcs resource config command. The following commands create four resources created for an active/passive Apache HTTP server in a Red Hat high availability cluster: an LVM-activate resource, a Filesystem resource, an IPaddr2 resource, and an Apache resource. After you create the resources, the following command displays the pcs commands you can use to re-create those resources on a different system. To display the pcs command or commands you can use to re-create only one configured resource, specify the resource ID for that resource. 18.3. Modifying resource parameters To modify the parameters of a configured resource, use the following command. The following sequence of commands show the initial values of the configured parameters for resource VirtualIP , the command to change the value of the ip parameter, and the values following the update command. Note When you update a resource's operation with the pcs resource update command, any options you do not specifically call out are reset to their default values. 18.4. Clearing failure status of cluster resources If a resource has failed, a failure message appears when you display the cluster status with the pcs status command. After attempting to resolve the cause of the failure, you can check the updated status of the resource by running the pcs status command again, and you can check the failure count for the cluster resources with the pcs resource failcount show --full command. You can clear that failure status of a resource with the pcs resource cleanup command. The pcs resource cleanup command resets the resource status and failcount value for the resource. This command also removes the operation history for the resource and re-detects its current state. The following command resets the resource status and failcount value for the resource specified by resource_id . If you do not specify resource_id , the pcs resource cleanup command resets the resource status and failcount value for all resources with a failure count. 
In addition to the pcs resource cleanup resource_id command, you can also reset the resource status and clear the operation history of a resource with the pcs resource refresh resource_id command. As with the pcs resource cleanup command, you can run the pcs resource refresh command with no options specified to reset the resource status and failcount value for all resources. Both the pcs resource cleanup and the pcs resource refresh commands clear the operation history for a resource and re-detect the current state of the resource. The pcs resource cleanup command operates only on resources with failed actions as shown in the cluster status, while the pcs resource refresh command operates on resources regardless of their current state. 18.5. Moving resources in a cluster Pacemaker provides a variety of mechanisms for configuring a resource to move from one node to another and to manually move a resource when needed. You can manually move resources in a cluster with the pcs resource move and pcs resource relocate commands, as described in Manually moving cluster resources . In addition to these commands, you can also control the behavior of cluster resources by enabling, disabling, and banning resources, as described in Disabling, enabling, and banning cluster resources . You can configure a resource so that it will move to a new node after a defined number of failures, and you can configure a cluster to move resources when external connectivity is lost. 18.5.1. Moving resources due to failure When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until: The resource's failure-timeout value is reached. The administrator manually resets the resource's failure count by using the pcs resource cleanup command. The value of migration-threshold is set to INFINITY by default. INFINITY is defined internally as a very large but finite number. A value of 0 disables the migration-threshold feature. Note Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state. The following example adds a migration threshold of 10 to the resource named dummy_resource , which indicates that the resource will move to a new node after 10 failures. You can add a migration threshold to the defaults for the whole cluster with the following command. To determine the resource's current failure status and limits, use the pcs resource failcount show command. There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. If the cluster property start-failure-is-fatal is set to true (which is the default), start failures cause the failcount to be set to INFINITY and always cause the resource to move immediately. Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled, then the cluster will fence the node to be able to start the resource elsewhere. If STONITH is not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will try to stop it again after the failure timeout. 18.5.2. Moving resources due to connectivity changes Setting up the cluster to move resources when external connectivity is lost is a two step process. 
Add a ping resource to the cluster. The ping resource uses the system utility of the same name to test if a list of machines (specified by DNS host name or IPv4/IPv6 address) are reachable and uses the results to maintain a node attribute called pingd . Configure a location constraint for the resource that will move the resource to a different node when connectivity is lost. The following table describes the properties you can set for a ping resource. Table 18.1. Properties of a ping resource Field Description dampen The time to wait (dampening) for further changes to occur. This prevents a resource from bouncing around the cluster when cluster nodes notice the loss of connectivity at slightly different times. multiplier The number of connected ping nodes gets multiplied by this value to get a score. Useful when there are multiple ping nodes configured. host_list The machines to contact to determine the current connectivity status. Allowed values include resolvable DNS host names, IPv4 and IPv6 addresses. The entries in the host list are space separated. The following example command creates a ping resource that verifies connectivity to gateway.example.com . In practice, you would verify connectivity to your network gateway/router. You configure the ping resource as a clone so that the resource will run on all cluster nodes. The following example configures a location constraint rule for the existing resource named Webserver . This will cause the Webserver resource to move to a host that is able to ping gateway.example.com if the host that it is currently running on cannot ping gateway.example.com . 18.6. Disabling a monitor operation The easiest way to stop a recurring monitor is to delete it. However, there can be times when you only want to disable it temporarily. In such cases, add enabled="false" to the operation's definition. When you want to reinstate the monitoring operation, set enabled="true" in the operation's definition. When you update a resource's operation with the pcs resource update command, any options you do not specifically call out are reset to their default values. For example, if you have configured a monitoring operation with a custom timeout value of 600, running the following commands will reset the timeout value to the default value of 20 (or whatever you have set the default value to with the pcs resource op defaults command). In order to maintain the original value of 600 for this option, when you reinstate the monitoring operation you must specify that value, as in the following example. 18.7. Configuring and managing cluster resource tags As of Red Hat Enterprise Linux 8.3, you can use the pcs command to tag cluster resources. This allows you to enable, disable, manage, or unmanage a specified set of resources with a single command. 18.7.1. Tagging cluster resources for administration by category The following procedure tags two resources with a resource tag and disables the tagged resources. In this example, the existing resources to be tagged are named d-01 and d-02 . Procedure Create a tag named special-resources for resources d-01 and d-02 . Display the resource tag configuration. Disable all resources that are tagged with the special-resources tag. Display the status of the resources to confirm that resources d-01 and d-02 are disabled. In addition to the pcs resource disable command, the pcs resource enable , pcs resource manage , and pcs resource unmanage commands support the administration of tagged resources.
After you have created a resource tag: You can delete a resource tag with the pcs tag delete command. You can modify resource tag configuration for an existing resource tag with the pcs tag update command. 18.7.2. Deleting a tagged cluster resource You cannot delete a tagged cluster resource with the pcs command. To delete a tagged resource, use the following procedure. Procedure Remove the resource tag. The following command removes the resource tag special-resources from all resources with that tag. The following command removes the resource tag special-resources from the resource d-01 only. Delete the resource. | [
"pcs resource status",
"pcs resource status VirtualIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started",
"pcs resource config resource_id",
"pcs resource config VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.168.0.120 cidr_netmask=24 Operations: monitor interval=30s",
"pcs resource status resource_id",
"pcs resource status VirtualIP VirtualIP (ocf::heartbeat:IPaddr2): Started",
"pcs resource status node= node_id",
"pcs resource status node=node-01 VirtualIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started",
"pcs resource create my_lvm ocf:heartbeat:LVM-activate vgname=my_vg vg_access_mode=system_id --group apachegroup pcs resource create my_fs Filesystem device=\"/dev/my_vg/my_lv\" directory=\"/var/www\" fstype=\"xfs\" --group apachegroup pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup pcs resource create Website apache configfile=\"/etc/httpd/conf/httpd.conf\" statusurl=\"http://127.0.0.1/server-status\" --group apachegroup",
"pcs resource config --output-format=cmd pcs resource create --no-default-ops --force -- my_lvm ocf:heartbeat:LVM-activate vg_access_mode=system_id vgname=my_vg op monitor interval=30s id=my_lvm-monitor-interval-30s timeout=90s start interval=0s id=my_lvm-start-interval-0s timeout=90s stop interval=0s id=my_lvm-stop-interval-0s timeout=90s; pcs resource create --no-default-ops --force -- my_fs ocf:heartbeat:Filesystem device=/dev/my_vg/my_lv directory=/var/www fstype=xfs op monitor interval=20s id=my_fs-monitor-interval-20s timeout=40s start interval=0s id=my_fs-start-interval-0s timeout=60s stop interval=0s id=my_fs-stop-interval-0s timeout=60s; pcs resource create --no-default-ops --force -- VirtualIP ocf:heartbeat:IPaddr2 cidr_netmask=24 ip=198.51.100.3 op monitor interval=10s id=VirtualIP-monitor-interval-10s timeout=20s start interval=0s id=VirtualIP-start-interval-0s timeout=20s stop interval=0s id=VirtualIP-stop-interval-0s timeout=20s; pcs resource create --no-default-ops --force -- Website ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl=http://127.0.0.1/server-status op monitor interval=10s id=Website-monitor-interval-10s timeout=20s start interval=0s id=Website-start-interval-0s timeout=40s stop interval=0s id=Website-stop-interval-0s timeout=60s; pcs resource group add apachegroup my_lvm my_fs VirtualIP Website",
"pcs resource config VirtualIP --output-format=cmd pcs resource create --no-default-ops --force -- VirtualIP ocf:heartbeat:IPaddr2 cidr_netmask=24 ip=198.51.100.3 op monitor interval=10s id=VirtualIP-monitor-interval-10s timeout=20s start interval=0s id=VirtualIP-start-interval-0s timeout=20s stop interval=0s id=VirtualIP-stop-interval-0s timeout=20s",
"pcs resource update resource_id [ resource_options ]",
"pcs resource config VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.168.0.120 cidr_netmask=24 Operations: monitor interval=30s pcs resource update VirtualIP ip=192.169.0.120 pcs resource config VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.169.0.120 cidr_netmask=24 Operations: monitor interval=30s",
"pcs resource cleanup resource_id",
"pcs resource meta dummy_resource migration-threshold=10",
"pcs resource defaults update migration-threshold=10",
"pcs resource create ping ocf:pacemaker:ping dampen=5s multiplier=1000 host_list=gateway.example.com clone",
"pcs constraint location Webserver rule score=-INFINITY pingd lt 1 or not_defined pingd",
"pcs resource update resourceXZY op monitor enabled=false pcs resource update resourceXZY op monitor enabled=true",
"pcs resource update resourceXZY op monitor timeout=600 enabled=true",
"pcs tag create special-resources d-01 d-02",
"pcs tag config special-resources d-01 d-02",
"pcs resource disable special-resources",
"pcs resource * d-01 (ocf::pacemaker:Dummy): Stopped (disabled) * d-02 (ocf::pacemaker:Dummy): Stopped (disabled)",
"pcs tag remove special-resources pcs tag No tags defined",
"pcs tag update special-resources remove d-01",
"pcs resource delete d-01 Attempting to stop: d-01... Stopped"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_managing-cluster-resources-configuring-and-managing-high-availability-clusters |
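To make the failure-handling discussion above concrete, the following sketch combines a migration threshold with a failure timeout on a single resource and then inspects and clears its failure count. The resource name dummy_resource and the 60-second timeout are illustrative values, not defaults.

```
pcs resource meta dummy_resource migration-threshold=10 failure-timeout=60s
pcs resource failcount show dummy_resource
pcs resource cleanup dummy_resource
```

With this combination, the resource moves away from a node after 10 failures there, and that node becomes eligible to run it again once the failure count expires after 60 seconds or is cleared manually.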
Configuring GitHub Actions | Configuring GitHub Actions Red Hat Trusted Application Pipeline 1.4 Learn how to configure GitHub Actions for secure CI/CD workflows. Red Hat Trusted Application Pipeline Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/configuring_github_actions/index |
23.8. Memory Allocation | 23.8. Memory Allocation In cases where the guest virtual machine crashes, the optional attribute dumpCore can be used to control whether the guest virtual machine's memory should be included in the generated core dump( dumpCore='on' ) or not included ( dumpCore='off' ). Note that the default setting is on , so unless the parameter is set to off , the guest virtual machine memory will be included in the core dumpfile. The <maxMemory> element determines maximum run-time memory allocation of the guest. The slots attribute specifies the number of slots available for adding memory to the guest. The <memory> element specifies the maximum allocation of memory for the guest at boot time. This can also be set using the NUMA cell size configuration, and can be increased by hot-plugging of memory to the limit specified by maxMemory . The <currentMemory> element determines the actual memory allocation for a guest virtual machine. This value can be less than the maximum allocation (set by <memory> ) to allow for the guest virtual machine memory to balloon as needed. If omitted, this defaults to the same value as the <memory> element. The unit attribute behaves the same as for memory. <domain> <maxMemory slots='16' unit='KiB'>1524288</maxMemory> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated core dumpfile --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> ... </domain> Figure 23.10. Memory unit | [
"<domain> <maxMemory slots='16' unit='KiB'>1524288</maxMemory> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated core dumpfile --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> </domain>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Memory_allocation |
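To show what the hot-plug path enabled by <maxMemory> looks like in practice, the following hedged example attaches a DIMM device at run time with virsh. The domain name guest1, the 512 MiB size, and the target NUMA node 0 are assumptions; the guest must also define a NUMA topology and have free slots for the operation to succeed.

```xml
<!-- memdev.xml: an illustrative 512 MiB DIMM targeting guest NUMA node 0 -->
<memory model='dimm'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
  </target>
</memory>
```

```
# virsh attach-device guest1 memdev.xml --live --config
```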
8.153. numactl | 8.153.1. RHBA-2014:1483 - numactl bug fix and enhancement update Updated numactl packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The numactl packages add simple Non-Uniform Memory Access (NUMA) policy support. It consists of the numactl program to run other programs with a specific NUMA policy, and the libnuma library to perform allocations with NUMA policy in applications. Note The numactl packages have been upgraded to upstream version 2.0.9, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1017048 ) This update also fixes the following bugs: Bug Fixes BZ# 812462 Prior to this update, the numa_parse_cpustring() function added an unallowed CPU into its bitmask. As a consequence, only the bits the user has access to were set. Consequently, the function could return different results every time it was used. With this update, the numa_parse_cpustring() code sets all bits in the "cpustring" argument regardless of the current task's CPU mask, and the aforementioned scenario no longer occurs. BZ# 819133 Previously, the compiler was enforcing libnuma to provide a constant within the "char*" parameter, which led to the following warning message being returned: testconst.c:10:45: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings] The underlying source code has been fixed so that the string is handled as a constant, and the user no longer receives warning messages. BZ# 873456 Previously, when the user set the affinity of the shell to be a subset of available CPUs and then attempted to use the numactl utility to bind to something absent from that affinity mask, the attempt failed. An upstream patch has been applied to fix this bug, and the numactl environment has been extended so that the user can choose whether to allow for the affinity mask to determine available CPUs or not. BZ# 1100134 Due to incompatibilities emerging after the latest numactl packages update, virsh processes terminated unexpectedly when any virsh command was run. This bug has been fixed, and the virsh commands now work correctly. Users of numactl are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/numactl |
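Since the advisory above refers to numactl binding and affinity behavior without showing the commands themselves, here is a brief, illustrative refresher; ./app and the node numbers are placeholders.

```
numactl --hardware                            # list the available NUMA nodes and their memory
numactl --show                                # show the NUMA policy of the current shell
numactl --cpunodebind=0 --membind=0 ./app     # run ./app on node 0 CPUs with node 0 memory
```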
2.11.5. Displaying Parameters of Control Groups | 2.11.5. Displaying Parameters of Control Groups To display the parameters of specific cgroups, run: where parameter is a pseudofile that contains values for a subsystem, and list_of_cgroups is a list of cgroups separated with spaces. For example: displays the values of cpuset.cpus and memory.limit_in_bytes for cgroups group1 and group2 . If you do not know the names of the parameters themselves, use a command like: | [
"~]USD cgget -r parameter list_of_cgroups",
"~]USD cgget -r cpuset.cpus -r memory.limit_in_bytes group1 group2",
"~]USD cgget -g cpuset /"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/displaying_parameters_of_control_groups |
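As an illustration of what cgget prints, querying a single group for two parameters might produce output like the following; the group name and values are examples only.

```
~]$ cgget -r cpuset.cpus -r memory.limit_in_bytes group1
group1:
cpuset.cpus: 0-1
memory.limit_in_bytes: 9223372036854775807
```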
Chapter 57. Kubernetes | Chapter 57. Kubernetes Since Camel 2.17 The Kubernetes components integrate your application with Kubernetes standalone or on top of OpenShift. 57.1. Kubernetes components See the following for usage of each component: Kubernetes ConfigMap Perform operations on Kubernetes ConfigMaps and get notified on ConfigMaps changes. Kubernetes Custom Resources Perform operations on Kubernetes Custom Resources and get notified on Custom Resource changes. Kubernetes Deployments Perform operations on Kubernetes Deployments and get notified on Deployment changes. Kubernetes Event Perform operations on Kubernetes Events and get notified on Events changes. Kubernetes HPA Perform operations on Kubernetes Horizontal Pod Autoscalers (HPA) and get notified on HPA changes. Kubernetes Job Perform operations on Kubernetes Jobs. Kubernetes Namespaces Perform operations on Kubernetes Namespaces and get notified on Namespace changes. Kubernetes Nodes Perform operations on Kubernetes Nodes and get notified on Node changes. Kubernetes Persistent Volume Perform operations on Kubernetes Persistent Volumes and get notified on Persistent Volume changes. Kubernetes Persistent Volume Claim Perform operations on Kubernetes Persistent Volume Claims and get notified on Persistent Volume Claim changes. Kubernetes Pods Perform operations on Kubernetes Pods and get notified on Pod changes. Kubernetes Replication Controller Perform operations on Kubernetes Replication Controllers and get notified on Replication Controller changes. Kubernetes Resources Quota Perform operations on Kubernetes Resource Quotas. Kubernetes Secrets Perform operations on Kubernetes Secrets. Kubernetes Service Account Perform operations on Kubernetes Service Accounts. Kubernetes Services Perform operations on Kubernetes Services and get notified on Service changes. Openshift Build Config Perform operations on OpenShift Build Configs. Openshift Builds Perform operations on OpenShift Builds. Openshift Deployment Configs Perform operations on Openshift Deployment Configs and get notified on Deployment Config changes. 57.2. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 57.3. Usage 57.3.1. Producer examples The following examples show how to use the camel-kubernetes producers. Create a pod from("direct:createPod") .toF("kubernetes-pods://%s?oauthToken=%s&operation=createPod", host, authToken); By using the KubernetesConstants.KUBERNETES_POD_SPEC header you can specify your PodSpec and pass it to this operation. Delete a pod from("direct:deletePod") .toF("kubernetes-pods://%s?oauthToken=%s&operation=deletePod", host, authToken); By using the KubernetesConstants.KUBERNETES_POD_NAME header you can specify your Pod name and pass it to this operation. 57.4. Using Kubernetes ConfigMaps and Secrets The camel-kubernetes component also provides functions that load property values from Kubernetes ConfigMaps or Secrets . For more information see . 57.5. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders').
String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"from(\"direct:createPod\") .toF(\"kubernetes-pods://%s?oauthToken=%s&operation=createPod\", host, authToken);",
"from(\"direct:createPod\") .toF(\"kubernetes-pods://%s?oauthToken=%s&operation=deletePod\", host, authToken);"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-component-starter |
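Several of the options above (camel.component.kubernetes-*.kubernetes-client) expect an existing io.fabric8.kubernetes.client.KubernetesClient bean. The following is only a minimal sketch of how such a shared bean might be declared in a Spring Boot application; it assumes the fabric8 kubernetes-client 6.x API, and the master URL and token shown are placeholders for your own environment. With autowired-enabled left at true, a single bean of this type can be picked up by the components automatically.

import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KubernetesClientConfig {

    // Builds one shared fabric8 client; the URL and token below are placeholder values.
    @Bean
    public KubernetesClient kubernetesClient() {
        Config config = new ConfigBuilder()
                .withMasterUrl("https://kubernetes.example.com:6443")
                .withOauthToken(System.getenv("KUBERNETES_TOKEN"))
                .build();
        return new KubernetesClientBuilder().withConfig(config).build();
    }
}

Reusing one client this way avoids passing oauthToken in every endpoint URI, as the producer examples above otherwise do.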
Chapter 8. Creating the mortgage-process project | Chapter 8. Creating the mortgage-process project A project is a container for assets such as data objects, business processes, guided rules, decision tables, and forms. The project that you are creating is similar to the existing Mortgage_Process sample project in Business Central. Procedure In Business Central, go to Menu Design Projects . Red Hat Decision Manager provides a default space called MySpace , as shown in the following image. You can use the default space to create and test example projects. Figure 8.1. Default space Click Add Project . Enter mortgage-process in the Name field. Click Configure Advanced Options and modify the GAV fields with the following values: Group ID : com.myspace Artifact ID : mortgage-process Version : 1.0.0 Click Add . The Assets view of the project opens. 8.1. Modifying the Mortgages sample project The Mortgages sample project consists of predefined data objects, guided decision tables, guided rules, forms, and a business process. Using the sample project provides a quick way to get acclimated with Red Hat Decision Manager. In a real business scenario, you would create all of the assets by providing data that is specific to your business requirements. Navigate to the Mortgages sample project to view the predefined assets. Procedure In Business Central, go to Menu Design Projects . In the upper-right corner of the screen, click the arrow next to Add Project and select Try Samples . Select Mortgages and click Ok . The Assets view of the project opens. Click an asset that you want to modify. All of the assets can be edited to meet your project requirements. 8.2. Creating a project using archetypes Archetypes are projects that are installed in Apache Maven repositories and contain a specific template structure. You can also generate parameterized versions of the project templates using archetypes. When you use an archetype to create a project, it is added to the Git repository that is connected to your Red Hat Decision Manager installation. Prerequisites You have created an archetype and added it to the Archetypes page in the Business Central Settings . For information about creating archetypes, see the Guide to Creating Archetypes . You have set a default archetype in your space in Business Central. For more information about archetype management, see Configuring Business Central settings and properties . Procedure In Business Central, go to Menu Design Projects . Select or create the space into which you want to add a new project from an archetype template. Click Add Project . Type the project name and description in the Name and Description fields. Click Configure Advanced Options . Select the Based on template checkbox. If required, select an archetype from the drop-down options; otherwise, the default archetype that is set in the space is selected. Click Add . The Assets view of the project opens based on the selected archetype template. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/new-project-proc_managing-projects
Chapter 3. Defining Camel routes | Chapter 3. Defining Camel routes Red Hat build of Apache Camel for Quarkus supports the Java DSL language to define Camel Routes. 3.1. Java DSL Extending org.apache.camel.builder.RouteBuilder and using the fluent builder methods available there is the most common way of defining Camel Routes. Here is a simple example of a route using the timer component: 3.1.1. Endpoint DSL Since Camel 3.0, you can also use fluent builders to define Camel endpoints. The following is equivalent to the previous example: Note Builder methods for all Camel components are available via camel-quarkus-core , but you still need to add the given component's extension as a dependency for the route to work properly. In the case of the above example, it would be camel-quarkus-timer . | [
"import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo?period=1000\") .log(\"Hello World\"); } }",
"import org.apache.camel.builder.RouteBuilder; import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(timer(\"foo\").period(1000)) .log(\"Hello World\"); } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-routes |
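As an illustration only, the Endpoint DSL builder can be combined with further fluent steps in the same RouteBuilder. The class and route names below are made up for the example, and it assumes camel-quarkus-timer is on the classpath as described in the note above.

import org.apache.camel.builder.RouteBuilder;

import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer;

public class GreetingRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Fires every 5 seconds, sets a constant body and logs it.
        from(timer("greeting").period(5000))
            .routeId("greeting-route")
            .setBody(constant("Hello from Camel Quarkus"))
            .log("${body}");
    }
}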
9.10. Setting up SASL Identity Mapping | 9.10. Setting up SASL Identity Mapping Simple Authentication and Security Layer (SASL) is an abstraction layer between protocols, such as LDAP, and authentication methods, such as GSS-API, that allows any protocol able to interact with SASL to use any authentication mechanism that works with SASL. Simply put, SASL is an intermediary that makes authenticating to applications using different mechanisms easier. SASL can also be used to establish an encrypted session between a client and server. The SASL framework allows different mechanisms to be used to authenticate a user to the server, depending on which mechanism is enabled in both the client and server applications. SASL also creates a layer for encrypted (secure) sessions. Using GSS-API, Directory Server utilizes Kerberos tickets to authenticate sessions and encrypt data. 9.10.1. About SASL Identity Mapping When processing a SASL bind request, the server matches, or maps, the SASL authentication ID used to authenticate to the Directory Server with an LDAP entry stored within the server. When using Kerberos, the SASL user ID usually has the format userid@REALM , such as [email protected] . This ID must be converted into the DN of the user's Directory Server entry, such as uid=scarter,ou=people,dc=example,dc=com . If the authentication ID clearly corresponds to the LDAP entry for a person, it is possible to configure the Directory Server to map the authentication ID automatically to the entry DN. Directory Server has some pre-configured default mappings which handle most common configurations, and customized maps can be created. By default, during a bind attempt, only the first matching mapping rule is applied if SASL mapping fallback is not enabled. For further details about SASL mapping fallback, see Section 9.10.4, "Enabling SASL Mapping Fallback" . Be sure to configure SASL maps so that only one mapping rule matches the authentication string. SASL mappings are configured by entries under a container entry: SASL identity mapping entries are children of this entry: Mapping entries are defined by the following attributes: nsSaslMapRegexString : The regular expression which is used to map the elements of the supplied authid . nsSaslMapFilterTemplate : A template which applies the elements of the nsSaslMapRegexString to create the search filter. nsSaslMapBaseDNTemplate : Provides the search base or a specific entry DN to match against the constructed DN. Optional: nsSaslMapPriority : Sets the priority of this SASL mapping. The priority value is used if nsslapd-sasl-mapping-fallback is enabled in cn=config . For details, see Section 9.10.4.1, "Setting SASL Mapping Priorities" . For further details, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . For example: The nsSaslMapRegexString attribute sets variables of the form \1 , \2 , \3 for bind IDs, which are filled into the template attributes during a search. This example sets up a SASL identity mapping for any user in the ou=People,dc=example,dc=com subtree who belongs to the inetOrgPerson object class. When a Directory Server receives a SASL bind request with [email protected] as the user ID ( authid ), the regular expression fills in the base DN template with uid=mconnors,ou=people,dc=EXAMPLE,dc=COM as the user ID, and authentication proceeds from there. Note The dc values are not case sensitive, so dc=EXAMPLE and dc=example are equivalent.
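Purely as an illustration of the capture-group substitution described above (this is not Directory Server code), the following sketch applies the same pattern and base DN template in plain Java:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SaslMapDemo {
    public static void main(String[] args) {
        // Same pattern as nsSaslMapRegexString: \(.*\)@\(.*\)\.\(.*\)
        Pattern authId = Pattern.compile("(.*)@(.*)\\.(.*)");
        Matcher m = authId.matcher("[email protected]");
        if (m.matches()) {
            // nsSaslMapBaseDNTemplate: uid=\1,ou=people,dc=\2,dc=\3
            String baseDn = "uid=" + m.group(1) + ",ou=people,dc=" + m.group(2) + ",dc=" + m.group(3);
            // Prints: uid=mconnors,ou=people,dc=EXAMPLE,dc=COM
            System.out.println(baseDn);
        }
    }
}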
The Directory Server can also use a more inclusive mapping scheme, such as the following: This matches any user ID and maps it to an entry under the ou=People,dc=example,dc=com subtree which meets the filter cn= userId . Mappings can be confined to a single realm by specifying the realm in the nsSaslMapRegexString attribute. For example: This mapping is identical to the previous mapping, except that it only applies to users authenticating from the US.EXAMPLE.COM realm. (Realms are described in Section 9.11.2.1, "About Principals and Realms" .) When a server connects to another server, such as during replication or with chaining, the default mappings will not properly map the identities. This is because the principal (SASL identity) for one server does not match the principal on the server where authentication is taking place, so it does not match the mapping entries. To allow server to server authentication using SASL, create a mapping for the specific server principal to a specific user entry. For example, this mapping matches the ldap1.example.com server to the cn=replication manager,cn=config entry. The mapping entry itself is created on the second server, such as ldap2.example.com . Sometimes, the realm name is not included in the principal name in SASL GSS-API configuration. A second mapping can be created which is identical to the first, only without specifying the realm in the principal name. For example: Because the realm is not specified, the second mapping is more general (meaning, it has the potential to match more entries than the first). The best practice is to have more specific mappings processed first and gradually progress through more general mappings. If a priority is not set for a SASL mapping using the nsSaslMapPriority parameter, there is no way to specify the order in which mappings are processed. However, there is a way to control how SASL mappings are processed: the name. The Directory Server processes SASL mappings in reverse ASCII order. In the previous two examples, the cn=z mapping (the first example) is processed first. If there is no match, the server processes the cn=y mapping (the second example). Note SASL mappings can be added when an instance is created during a silent installation by specifying the mappings in an LDIF file and adding the LDIF file with the ConfigFile directive. Using silent installation is described in the Installation Guide . 9.10.2. Default SASL Mappings for Directory Server The Directory Server has pre-defined SASL mapping rules to handle some of the most common use cases. Kerberos UID Mapping This matches a Kerberos principal using a two-part realm, such as user@example.com . The realm is then used to define the search base, and the user ID ( authid ) defines the filter. The search base is dc=example,dc=com and the filter is (uid=user) . RFC 2829 DN Syntax This mapping matches an authid that is a valid DN (defined in RFC 2829) prefixed by dn: . The authid maps directly to the specified DN. RFC 2829 U Syntax This mapping matches an authid that is a UID prefixed by u: . The value specified after the prefix defines a filter of (uid=value) . The search base is hard-coded to be the suffix of the default userRoot database. UID Mapping This mapping matches an authid that is any plain string that does not match the other default mapping rules. It uses this value to define a filter of (uid=value) . The search base is hard-coded to be the suffix of the default userRoot database. 9.10.3.
Configuring SASL Identity Mapping (Simple Authentication and Security Layer) SASL identity mapping can be configured from either the Directory Server or the command line. For SASL identity mapping to work for SASL authentication, the mapping must return one, and only one, entry that matches and Kerberos must be configured on the host machine. 9.10.3.1. Configuring SASL Identity Mapping Using the Command Line To configure SASL identity mapping from the command line, use the dsconf utility to add the identity mapping scheme. Add the identity mapping scheme. For example: This matches any user's common name and maps it to the result of the subtree search with base ou=People,dc=example,dc=com , based on the filter cn= userId . Restart the instance: Note Adding the SASL map with dsconf adds the mapping to the end of the list, regardless of its ASCII order. 9.10.3.2. Configuring SASL Identity Mapping Using the Web Console To add a SASL identity mapping scheme: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select SASL Settings & Mappings . Click Create New Mapping . Fill the form. For example: Click Save . 9.10.4. Enabling SASL Mapping Fallback Using the default settings, Directory Server verifies only the first matching SASL mapping. If this first matching mapping fails, the bind operation fails and no further matching mappings are verified. However, you can configure Directory Server to verify all matching mappings by enabling the nsslapd-sasl-mapping-fallback parameter: If fallback is enabled and only one user identity is returned, the bind succeeds. If no user, or more than one user is returned, the bind fails. 9.10.4.1. Setting SASL Mapping Priorities If you enabled SASL mapping fallback using the nsslapd-sasl-mapping-fallback attribute, you can optionally set the nsSaslMapPriority attribute in mapping configurations to prioritize them. The nsSaslMapPriority attribute supports values from 1 (highest priority) to 100 (lowest priority). The default is 100 . For example, to set the highest priority for the cn=Kerberos uid mapping,cn=mapping,cn=sasl,cn=config mapping: | [
"dn: cn=sasl,cn=config objectClass: top objectClass: nsContainer cn: sasl",
"dn: cn=mapping,cn=sasl,cn=config objectClass: top objectClass: nsContainer cn: mapping",
"dn: cn=mymap,cn=mapping,cn=sasl,cn=config objectclass:top objectclass:nsSaslMapping cn: mymap nsSaslMapRegexString: \\(.*\\)@\\(.*\\)\\.\\(.*\\) nsSaslMapFilterTemplate: (objectclass=inetOrgPerson) nsSaslMapBaseDNTemplate: uid=\\1,ou=people,dc=\\2,dc=\\3",
"dn: cn=example map,cn=mapping,cn=sasl,cn=config objectclass: top objectclass: nsSaslMapping cn: example map nsSaslMapRegexString: \\(.*\\) nsSaslMapBaseDNTemplate: ou=People,dc=example,dc=com nsSaslMapFilterTemplate: (cn=\\1)",
"dn: cn=example map,cn=mapping,cn=sasl,cn=config objectclass: top objectclass: nsSaslMapping cn: example map nsSaslMapRegexString: \\(.*\\) @US.EXAMPLE.COM nsSaslMapBaseDNTemplate: ou=People,dc=example,dc=com nsSaslMapFilterTemplate: (cn=\\1)",
"dn: cn=z,cn=mapping,cn=sasl,cn=config objectclass: top objectclass: nsSaslMapping cn: z nsSaslMapRegexString: ldap/[email protected] nsSaslMapBaseDNTemplate: cn=replication manager,cn=config nsSaslMapFilterTemplate: (objectclass=*)",
"dn: cn=y,cn=mapping,cn=sasl,cn=config objectclass: top objectclass: nsSaslMapping cn: y nsSaslMapRegexString: ldap/ldap1.example.com nsSaslMapBaseDNTemplate: cn=replication manager,cn=config nsSaslMapFilterTemplate: (objectclass=*)",
"dn: cn=Kerberos uid mapping,cn=mapping,cn=sasl,cn=config objectClass: top objectClass: nsSaslMapping cn: Kerberos uid mapping nsSaslMapRegexString: \\(.*\\)@\\(.*\\)\\.\\(.*\\) nsSaslMapBaseDNTemplate: dc=\\2,dc=\\3 nsSaslMapFilterTemplate: (uid=\\1)",
"dn: cn=rfc 2829 dn syntax,cn=mapping,cn=sasl,cn=config objectClass: top objectClass: nsSaslMapping cn: rfc 2829 dn syntax nsSaslMapRegexString: ^dn:\\(.*\\) nsSaslMapBaseDNTemplate: \\1 nsSaslMapFilterTemplate: (objectclass=*)",
"dn: cn=rfc 2829 u syntax,cn=mapping,cn=sasl,cn=config objectClass: top objectClass: nsSaslMapping cn: rfc 2829 u syntax nsSaslMapRegexString: ^u:\\(.*\\) nsSaslMapBaseDNTemplate: dc=example,dc=com nsSaslMapFilterTemplate: (uid=\\1)",
"dn: cn=uid mapping,cn=mapping,cn=sasl,cn=config objectClass: top objectClass: nsSaslMapping cn: uid mapping nsSaslMapRegexString: ^[^:@]+USD nsSaslMapBaseDNTemplate: dc=example,dc=com nsSaslMapFilterTemplate: (uid=&)",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com sasl create --cn \" example_map \" --nsSaslMapRegexString \" \\(.*\\) \" --nsSaslMapBaseDNTemplate \" ou=People,dc=example,dc=com \" --nsSaslMapFilterTemplate \" (cn=\\1) \" --nsSaslMapPriority 50 Successfully created example_map",
"dsctl instance_name restart",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-sasl-mapping-fallback=on Successfully replaced \"nsslapd-sasl-mapping-fallback\"",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=Kerberos uid mapping,cn=mapping,cn=sasl,cn=config changetype: modify replace: nsSaslMapPriority nsSaslMapPriority: 1"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/sasl |
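For completeness, a client-side SASL GSS-API bind is what produces the authid that these mappings translate into an entry DN. The sketch below uses plain JNDI and is only one possible way to exercise a mapping; it assumes the JVM can already obtain Kerberos credentials (for example through a JAAS login configuration and a valid ticket cache), and the host name is a placeholder.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class SaslGssapiBind {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://server.example.com:389");
        // SASL GSS-API bind: the Kerberos principal becomes the authid that
        // the nsSaslMapping entries map to an entry DN on the server side.
        env.put(Context.SECURITY_AUTHENTICATION, "GSSAPI");

        DirContext ctx = new InitialDirContext(env);
        System.out.println("Bound via SASL/GSSAPI");
        ctx.close();
    }
}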
8.10. alsa-utils | 8.10.1. RHBA-2014:1603 - alsa-utils bug fix update Updated alsa-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The alsa-utils packages contain command line utilities for the Advanced Linux Sound Architecture (ALSA). Bug Fix BZ# 1072956 The alsa-utils packages have been updated with various upstream fixes to improve stability and usage. Users of alsa-utils are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/alsa-utils
Chapter 6. CronJob [batch/v1] | Chapter 6. CronJob [batch/v1] Description CronJob represents the configuration of a single cron job. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CronJobSpec describes how the job execution will look like and when it will actually run. status object CronJobStatus represents the current state of a cron job. 6.1.1. .spec Description CronJobSpec describes how the job execution will look like and when it will actually run. Type object Required schedule jobTemplate Property Type Description concurrencyPolicy string Specifies how to treat concurrent executions of a Job. Valid values are: - "Allow" (default): allows CronJobs to run concurrently; - "Forbid": forbids concurrent runs, skipping run if run hasn't finished yet; - "Replace": cancels currently running job and replaces it with a new one Possible enum values: - "Allow" allows CronJobs to run concurrently. - "Forbid" forbids concurrent runs, skipping run if hasn't finished yet. - "Replace" cancels currently running job and replaces it with a new one. failedJobsHistoryLimit integer The number of failed finished jobs to retain. Value must be non-negative integer. Defaults to 1. jobTemplate object JobTemplateSpec describes the data a Job should have when created from a template schedule string The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron . startingDeadlineSeconds integer Optional deadline in seconds for starting the job if it misses scheduled time for any reason. Missed jobs executions will be counted as failed ones. successfulJobsHistoryLimit integer The number of successful finished jobs to retain. Value must be non-negative integer. Defaults to 3. suspend boolean This flag tells the controller to suspend subsequent executions, it does not apply to already started executions. Defaults to false. timeZone string The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones . If not specified, this will default to the time zone of the kube-controller-manager process. The set of valid time zone names and the time zone offset is loaded from the system-wide time zone database by the API server during CronJob validation and the controller manager during execution. If no system-wide time zone database can be found a bundled version of the database is used instead. If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host configuration, the controller will stop creating new new Jobs and will create a system event with the reason UnknownTimeZone. More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones This is beta field and must be enabled via the CronJobTimeZone feature gate. 6.1.2. 
.spec.jobTemplate Description JobTemplateSpec describes the data a Job should have when created from a template Type object Property Type Description metadata ObjectMeta Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object JobSpec describes how the job execution will look like. 6.1.3. .spec.jobTemplate.spec Description JobSpec describes how the job execution will look like. Type object Required template Property Type Description activeDeadlineSeconds integer Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; value must be positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again. backoffLimit integer Specifies the number of retries before marking this job failed. Defaults to 6 completionMode string CompletionMode specifies how Pod completions are tracked. It can be NonIndexed (default) or Indexed . NonIndexed means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to each other. Indexed means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io/job-completion-index. The Job is considered complete when there is one successfully completed Pod for each index. When value is Indexed , .spec.completions must be specified and .spec.parallelism must be less than or equal to 10^5. In addition, The Pod name takes the form USD(job-name)-USD(index)-USD(random-string) , the Pod hostname takes the form USD(job-name)-USD(index) . More completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job. completions integer Specifies the desired number of successfully finished pods the job should be run with. Setting to nil means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value. Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ manualSelector boolean manualSelector controls generation of pod labels and pod selectors. Leave manualSelector unset unless you are certain what you are doing. When false or unset, the system pick labels unique to this job and appends those labels to the pod template. When true, the user is responsible for picking unique labels and specifying the selector. Failure to pick a unique label may cause this and other jobs to not function correctly. However, You may see manualSelector=true in jobs that were created with the old extensions/v1beta1 API. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#specifying-your-own-pod-selector parallelism integer Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. 
More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ podFailurePolicy object PodFailurePolicy describes how failed pods influence the backoffLimit. selector LabelSelector A label query over pods that should match the pod count. Normally, the system sets this field for you. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors suspend boolean Suspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too. Defaults to false. template PodTemplateSpec Describes the pod that will be created when executing a job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ ttlSecondsAfterFinished integer ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes. 6.1.4. .spec.jobTemplate.spec.podFailurePolicy Description PodFailurePolicy describes how failed pods influence the backoffLimit. Type object Required rules Property Type Description rules array A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. rules[] object PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of OnExitCodes and onPodConditions, but not both, can be used in each rule. 6.1.5. .spec.jobTemplate.spec.podFailurePolicy.rules Description A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. Type array 6.1.6. .spec.jobTemplate.spec.podFailurePolicy.rules[] Description PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of OnExitCodes and onPodConditions, but not both, can be used in each rule. Type object Required action Property Type Description action string Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all running pods are terminated. - Ignore: indicates that the counter towards the .backoffLimit is not incremented and a replacement pod is created. - Count: indicates that the pod is handled in the default way - the counter towards the .backoffLimit is incremented. Additional values are considered to be added in the future. 
Clients should react to an unknown action by skipping the rule. Possible enum values: - "Count" This is an action which might be taken on a pod failure - the pod failure is handled in the default way - the counter towards .backoffLimit, represented by the job's .status.failed field, is incremented. - "FailJob" This is an action which might be taken on a pod failure - mark the pod's job as Failed and terminate all running pods. - "Ignore" This is an action which might be taken on a pod failure - the counter towards .backoffLimit, represented by the job's .status.failed field, is not incremented and a replacement pod is created. onExitCodes object PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. onPodConditions array Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. onPodConditions[] object PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. 6.1.7. .spec.jobTemplate.spec.podFailurePolicy.rules[].onExitCodes Description PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. Type object Required operator values Property Type Description containerName string Restricts the check for exit codes to the container with the specified name. When null, the rule applies to all containers. When specified, it should match one the container or initContainer names in the pod template. operator string Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is in the set of specified values. - NotIn: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is not in the set of specified values. Additional values are considered to be added in the future. Clients should react to an unknown operator by assuming the requirement is not satisfied. Possible enum values: - "In" - "NotIn" values array (integer) Specifies the set of values. Each returned container exit code (might be multiple in case of multiple containers) is checked against this set of values with respect to the operator. The list of values must be ordered and must not contain duplicates. Value '0' cannot be used for the In operator. At least one element is required. At most 255 elements are allowed. 6.1.8. 
.spec.jobTemplate.spec.podFailurePolicy.rules[].onPodConditions Description Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. Type array 6.1.9. .spec.jobTemplate.spec.podFailurePolicy.rules[].onPodConditions[] Description PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. Type object Required type status Property Type Description status string Specifies the required Pod condition status. To match a pod condition it is required that the specified status equals the pod condition status. Defaults to True. type string Specifies the required Pod condition type. To match a pod condition it is required that specified type equals the pod condition type. 6.1.10. .status Description CronJobStatus represents the current state of a cron job. Type object Property Type Description active array (ObjectReference) A list of pointers to currently running jobs. lastScheduleTime Time Information when was the last time the job was successfully scheduled. lastSuccessfulTime Time Information when was the last time the job successfully completed. 6.2. API endpoints The following API endpoints are available: /apis/batch/v1/cronjobs GET : list or watch objects of kind CronJob /apis/batch/v1/watch/cronjobs GET : watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/cronjobs DELETE : delete collection of CronJob GET : list or watch objects of kind CronJob POST : create a CronJob /apis/batch/v1/watch/namespaces/{namespace}/cronjobs GET : watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} DELETE : delete a CronJob GET : read the specified CronJob PATCH : partially update the specified CronJob PUT : replace the specified CronJob /apis/batch/v1/watch/namespaces/{namespace}/cronjobs/{name} GET : watch changes to an object of kind CronJob. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status GET : read status of the specified CronJob PATCH : partially update status of the specified CronJob PUT : replace status of the specified CronJob 6.2.1. /apis/batch/v1/cronjobs Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind CronJob Table 6.2. 
HTTP responses HTTP code Reponse body 200 - OK CronJobList schema 401 - Unauthorized Empty 6.2.2. /apis/batch/v1/watch/cronjobs Table 6.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. Table 6.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/batch/v1/namespaces/{namespace}/cronjobs Table 6.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CronJob Table 6.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.8. Body parameters Parameter Type Description body DeleteOptions schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CronJob Table 6.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK CronJobList schema 401 - Unauthorized Empty HTTP method POST Description create a CronJob Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body CronJob schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 202 - Accepted CronJob schema 401 - Unauthorized Empty 6.2.4. /apis/batch/v1/watch/namespaces/{namespace}/cronjobs Table 6.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. Table 6.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} Table 6.18. 
Global path parameters Parameter Type Description name string name of the CronJob namespace string object name and auth scope, such as for teams and projects Table 6.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CronJob Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.21. Body parameters Parameter Type Description body DeleteOptions schema Table 6.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CronJob Table 6.23. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CronJob Table 6.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.25. Body parameters Parameter Type Description body Patch schema Table 6.26. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CronJob Table 6.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.28. Body parameters Parameter Type Description body CronJob schema Table 6.29. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty 6.2.6. /apis/batch/v1/watch/namespaces/{namespace}/cronjobs/{name} Table 6.30. Global path parameters Parameter Type Description name string name of the CronJob namespace string object name and auth scope, such as for teams and projects Table 6.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CronJob. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.7. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status Table 6.33. Global path parameters Parameter Type Description name string name of the CronJob namespace string object name and auth scope, such as for teams and projects Table 6.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CronJob Table 6.35. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CronJob Table 6.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.37. 
Body parameters Parameter Type Description body Patch schema Table 6.38. HTTP responses HTTP code Response body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CronJob Table 6.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.40. Body parameters Parameter Type Description body CronJob schema Table 6.41. HTTP responses HTTP code Response body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/workloads_apis/cronjob-batch-v1
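The endpoints documented above can be exercised with any Kubernetes-aware client. The commands below are a minimal sketch rather than part of the reference itself; the namespace my-namespace and the CronJob name my-cronjob are placeholder values, and appropriate RBAC permissions are assumed.

# List CronJobs through the list endpoint, using the limit parameter for the paging behavior described above.
oc get --raw '/apis/batch/v1/namespaces/my-namespace/cronjobs?limit=5'

# Watch a single CronJob by combining the watch parameter with a fieldSelector, as suggested in place of the deprecated watch endpoints.
oc get --raw '/apis/batch/v1/namespaces/my-namespace/cronjobs?watch=true&fieldSelector=metadata.name=my-cronjob'

# The same list, fetched through the higher-level CLI with client-side chunking.
oc get cronjobs -n my-namespace --chunk-size=5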
33.5. Managing Dynamic DNS Updates | 33.5. Managing Dynamic DNS Updates 33.5.1. Enabling Dynamic DNS Updates Dynamic DNS updates are disabled by default for new DNS zones in IdM. With dynamic updates disabled, the ipa-client-install script cannot add a DNS record pointing to the new client. Note Enabling dynamic updates can potentially pose a security risk. However, if enabling dynamic updates is acceptable in your environment, you can do it to make client installations easier. Enabling dynamic updates requires the following: The DNS zone must be configured to allow dynamic updates The local clients must be configured to send dynamic updates 33.5.1.1. Configuring the DNS Zone to Allow Dynamic Updates Enabling Dynamic DNS Updates in the Web UI Open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.16. DNS Zone Management Click on the zone name in the list of all zones to open the DNS zone page. Figure 33.17. Editing a Master Zone Click Settings to switch to the DNS zone settings tab. Figure 33.18. The Settings Tab in the Master Zone Edit Page Scroll down to the Dynamic update field, and set the value to True . Figure 33.19. Enabling Dynamic DNS Updates Click Save at the top of the page to confirm the new configuration. Enabling Dynamic DNS Updates from the Command Line To allow dynamic updates to the DNS zones from the command line, use the ipa dnszone-mod command with the --dynamic-update=TRUE option. For example: 33.5.1.2. Configuring the Clients to Send Dynamic Updates Clients are automatically set up to send DNS updates when they are enrolled in the domain, by using the --enable-dns-updates option with the ipa-client-install script. The DNS zone has a time to live (TTL) value set for records within its SOA configuration. However, the TTL for the dynamic updates is managed on the local system by the System Security Service Daemon (SSSD). To change the TTL value for the dynamic updates, edit the SSSD file to set a value; the default is 1200 seconds. Open the SSSD configuration file. Find the domain section for the IdM domain. If dynamic updates have not been enabled for the client, then set the dyndns_update value to true. Add or edit the dyndns_ttl parameter to set the value, in seconds. 33.5.2. Synchronizing A/AAAA and PTR Records A and AAAA records are configured separately from PTR records in reverse zones. Because these records are configured independently, it is possible for A/AAAA records to exist without corresponding PTR records, and vice versa. There are some DNS setting requirements for PTR synchronization to work: Both forward and reverse zones must be managed by the IdM server. Both zones must have dynamic updates enabled. Enabling dynamic updates is covered in Section 33.5.1, "Enabling Dynamic DNS Updates" . PTR synchronization must be enabled for the master forward and reverse zone. The PTR record will be updated only if the name of the requesting client matches the name in the PTR record. Important Changes made through the IdM web UI, through the IdM command-line tools, or by editing the LDAP entry directly do not update the PTR record. Only changes made by the DNS service itself trigger PTR record synchronization. Warning A client system can update its own IP address. This means that a compromised client can be used to overwrite PTR records by changing its IP address. 33.5.2.1. 
Configuring PTR Record Synchronization in the Web UI Note that PTR record synchronization must be configured on the zone where A or AAAA records are stored, not on the reverse DNS zone where PTR records are located. Open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.20. DNS Zone Management Click on the zone name in the list of all zones to open the DNS zone page. Figure 33.21. Editing a DNS Zone Click Settings to switch to the DNS zone settings tab. Figure 33.22. The Settings Tab in the Master Zone Edit Page Select the Allow PTR sync check box. Figure 33.23. Enabling PTR Synchronization Click Save at the top of the page to confirm the new configuration. 33.5.2.2. Configuring PTR Record Synchronization Using the Command Line You can configure PTR record synchronization either for a specific zone or globally for all zones using the command line. 33.5.2.2.1. Configuring PTR Record Synchronization for a Specific Zone For example, to configure PTR record synchronization for the idm.example.com forward zone: Enable dynamic updates for the forward zone: Configure the update policy of the forward zone: Enable PTR Record synchronization for the forward zone: Enable dynamic updates for the reverse zone: 33.5.2.2.2. Configuring PTR Record Synchronization Globally for all Zones You can enable PTR synchronization for all zones managed by IdM using one of the following procedures: To enable PTR synchronization for all zones on all servers at the same time: To enable the synchronization per-server: Add the sync_ptr yes; setting to the dyndb "ipa" "/usr/lib64/bind/ldap.so" section in the /etc/named.conf file: Restart IdM: Repeat the steps on each IdM server with a DNS service installed. 33.5.3. Updating DNS Dynamic Update Policies DNS domains maintained by IdM servers can accept a DNS dynamic update according to RFC 3007 [4] . The rules that determine which records can be modified by a specific client follow the same syntax as the update-policy statement in the /etc/named.conf file. For more information on dynamic update policies, see the BIND 9 documentation . Note that if dynamic DNS updates are disabled for the DNS zone, all DNS updates are declined without reflecting the dynamic update policy statement. For information on enabling dynamic DNS updates, see Section 33.5.1, "Enabling Dynamic DNS Updates" . Updating DNS Update Policies in the Web UI Open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.24. DNS Zone Management Click on the zone name in the list of all zones to open the DNS zone page. Figure 33.25. Editing a DNS Zone Click Settings to switch to the DNS zone settings tab. Figure 33.26. The Settings Tab in the Master Zone Edit Page Set the required update policies in a semi-colon separated list in the BIND update policy text box. Figure 33.27. DNS Update Policy Settings Click Save at the top of the DNS zone page to confirm the new configuration. Updating DNS Update Policies from the Command Line To set the DNS update policy from the command line, use the --update-policy option and add the access control rule in a statement after the option. For example: [4] For the full text of RFC 3007, see http://tools.ietf.org/html/rfc3007 | [
"[user@server ~]USD ipa dnszone-mod server.example.com --dynamic-update=TRUE",
"ipa-client-install --enable-dns-updates",
"vim /etc/sssd/sssd.conf",
"[domain/ipa.example.com]",
"dyndns_update = true",
"dyndns_ttl = 2400",
"ipa dnszone-mod idm.example.com. --dynamic-update=TRUE",
"ipa dnszone-mod idm.example.com. --update-policy='grant IDM.EXAMPLE.COM krb5-self * A; grant IDM.EXAMPLE.COM krb5-self * AAAA; grant IDM.EXAMPLE.COM krb5-self * SSHFP;'",
"ipa dnszone-mod idm.example.com. --allow-sync-ptr=True",
"ipa dnszone-mod 2.0.192.in-addr.arpa. --dynamic-update=TRUE",
"ipa dnsconfig-mod --allow-sync-ptr=true",
"dyndb \"ipa\" \"/usr/lib64/bind/ldap.so\" { sync_ptr yes; };",
"ipactl restart",
"ipa dnszone-mod zone.example.com --update-policy \"grant EXAMPLE.COM krb5-self * A; grant EXAMPLE.COM krb5-self * AAAA; grant EXAMPLE.COM krb5-self * SSHFP;\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-dynamic-dns-updates |
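After enabling dynamic updates and PTR synchronization as described above, the settings can be spot-checked from an enrolled host. This is an illustrative sketch rather than part of the original procedure; idm.example.com, client1.idm.example.com, and 192.0.2.10 are placeholder values.

# Confirm that the forward zone permits dynamic updates and PTR synchronization.
ipa dnszone-show idm.example.com --all | grep -iE 'dynamic update|ptr'

# Verify that the client's A record and the matching PTR record agree.
dig +short client1.idm.example.com A
dig +short -x 192.0.2.10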
Operators | Operators OpenShift Container Platform 4.18 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.18",
"registry.redhat.io/redhat/redhat-operator-index:v4.18",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.31 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.31",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>",
"apiVersion: v1 kind: Namespace metadata: name: team1-operator",
"oc create -f team1-operator.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1",
"oc create -f team1-operatorgroup.yaml",
"apiVersion: v1 kind: Namespace metadata: name: global-operators",
"oc create -f global-operators.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators",
"oc create -f global-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc get csvs -n openshift",
"oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF",
"oc get events",
"LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token 1 metadata: name: scoped namespace: scoped annotations: kubernetes.io/service-account.name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped 1 targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: scoped spec: channel: stable-v1 name: openshift-cert-manager-operator source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.18 1",
". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <catalog_dir>",
"echo USD?",
"0",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml",
"--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---",
"opm validate <catalog_dir>",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/<index_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.18 --tag mirror.example.com/abc/abc-redhat-operator-index:4.18.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.18",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.18 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.18 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.18] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.18 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.18",
"opm migrate <registry_image> <fbc_directory>",
"opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.18",
"opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.18 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest",
"apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" grpcPodConfig: securityContextConfig: <security_mode> 2 image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge",
"grpcPodConfig: nodeSelector: custom_label: <label>",
"grpcPodConfig: priorityClassName: <priority_class>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical",
"grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"go 1.22.0 github.com/onsi/ginkgo/v2 v2.17.1 github.com/onsi/gomega v1.32.0 k8s.io/api v0.30.1 k8s.io/apimachinery v0.30.1 k8s.io/client-go v0.30.1 sigs.k8s.io/controller-runtime v0.18.4",
"go mod tidy",
"- ENVTEST_K8S_VERSION = 1.29.0 + ENVTEST_K8S_VERSION = 1.30.0",
"- KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) - ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) - GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + KUSTOMIZE ?= USD(LOCALBIN)/kustomize + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen + ENVTEST ?= USD(LOCALBIN)/setup-envtest + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint",
"- KUSTOMIZE_VERSION ?= v5.3.0 - CONTROLLER_TOOLS_VERSION ?= v0.14.0 - ENVTEST_VERSION ?= release-0.17 - GOLANGCI_LINT_VERSION ?= v1.57.2 + KUSTOMIZE_VERSION ?= v5.4.2 + CONTROLLER_TOOLS_VERSION ?= v0.15.0 + ENVTEST_VERSION ?= release-0.18 + GOLANGCI_LINT_VERSION ?= v1.59.1",
"- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))",
"- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))",
"- @[ -f USD(1) ] || { + @[ -f \"USD(1)-USD(3)\" ] || { echo \"Downloading USDUSD{package}\" ; + rm -f USD(1) || true ; - mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; - } + mv USD(1) USD(1)-USD(3) ; + } ; + ln -sf USD(1)-USD(3) USD(1)",
"- exportloopref + - ginkgolinter - prealloc + - revive + + linters-settings: + revive: + rules: + - name: comment-spacings",
"- FROM golang:1.21 AS builder + FROM golang:1.22 AS builder",
"\"sigs.k8s.io/controller-runtime/pkg/log/zap\" + \"sigs.k8s.io/controller-runtime/pkg/metrics/filters\" var enableHTTP2 bool - flag.StringVar(&metricsAddr, \"metrics-bind-address\", \":8080\", \"The address the metric endpoint binds to.\") + var tlsOpts []func(*tls.Config) + flag.StringVar(&metricsAddr, \"metrics-bind-address\", \"0\", \"The address the metrics endpoint binds to. \"+ + \"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.\") flag.StringVar(&probeAddr, \"health-probe-bind-address\", \":8081\", \"The address the probe endpoint binds to.\") flag.BoolVar(&enableLeaderElection, \"leader-elect\", false, \"Enable leader election for controller manager. \"+ \"Enabling this will ensure there is only one active controller manager.\") - flag.BoolVar(&secureMetrics, \"metrics-secure\", false, - \"If set the metrics endpoint is served securely\") + flag.BoolVar(&secureMetrics, \"metrics-secure\", true, + \"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.\") - tlsOpts := []func(*tls.Config){} + // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server. + // More info: + // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/server + // - https://book.kubebuilder.io/reference/metrics.html + metricsServerOptions := metricsserver.Options{ + BindAddress: metricsAddr, + SecureServing: secureMetrics, + // TODO(user): TLSOpts is used to allow configuring the TLS config used for the server. If certificates are + // not provided, self-signed certificates will be generated by default. This option is not recommended for + // production environments as self-signed certificates do not offer the same level of trust and security + // as certificates issued by a trusted Certificate Authority (CA). The primary risk is potentially allowing + // unauthorized access to sensitive metrics data. Consider replacing with CertDir, CertName, and KeyName + // to provide certificates, ensuring the server communicates using trusted and secure certificates. + TLSOpts: tlsOpts, + } + + if secureMetrics { + // FilterProvider is used to protect the metrics endpoint with authn/authz. + // These configurations ensure that only authorized users and service accounts + // can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info: + // https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/filters#WithAuthenticationAndAuthorization + metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization + } + mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ - Scheme: scheme, - Metrics: metricsserver.Options{ - BindAddress: metricsAddr, - SecureServing: secureMetrics, - TLSOpts: tlsOpts, - }, + Scheme: scheme, + Metrics: metricsServerOptions,",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:8081",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.18",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:6789",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4.18",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:8081",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-azure: \"true\"",
"// Get ENV var clientID := os.Getenv(\"CLIENTID\") tenantID := os.Getenv(\"TENANTID\") subscriptionID := os.Getenv(\"SUBSCRIPTIONID\") azureFederatedTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID credReqTemplate.Spec.AzureProviderSpec.AzureRegion = \"centralus\" credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID credReqTemplate.CloudTokenPath = azureFederatedTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"volumeMounts: - name: bound-sa-token mountPath: /var/run/secrets/openshift/serviceaccount readOnly: true volumes: # This service account token can be used to provide identity outside the cluster. - name: bound-sa-token projected: sources: - serviceAccountToken: path: token audience: openshift",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-gcp: \"true\"",
"// Get ENV var audience := os.Getenv(\"AUDIENCE\") serviceAccountEmail := os.Getenv(\"SERVICE_ACCOUNT_EMAIL\") gcpIdentityTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.GCPProviderSpec.Audience = audience credReqTemplate.Spec.GCPProviderSpec.ServiceAccountEmail = serviceAccountEmail credReqTemplate.CloudTokenPath = gcpIdentityTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"service_account_json := secret.StringData[\"service_account.json\"]",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.38.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.38.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 }, ] }",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/operators/index |
Chapter 1. Troubleshooting | Chapter 1. Troubleshooting Before using the Troubleshooting guide, you can run the oc adm must-gather command to gather details, logs, and take steps in debugging issues. For more details, see Running the must-gather command to troubleshoot . Additionally, check your role-based access. See Role-based access control for details. 1.1. Documented troubleshooting View the list of troubleshooting topics for Red Hat Advanced Cluster Management for Kubernetes: Installation To view the main documentation for the installing tasks, see Installing and upgrading . Troubleshooting installation status stuck in installing or pending Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade Backup and restore To view the main documentation for backup and restore, see Backup and restore . Troubleshooting restore status finishes with errors Cluster management To view the main documentation about managing your clusters, see The multicluster engine operator cluster lifecycle overview . Troubleshooting an offline cluster Troubleshooting a managed cluster import failure Troubleshooting cluster with pending import status Troubleshooting imported clusters offline after certificate change Troubleshooting cluster status changing from offline to available Troubleshooting cluster creation on VMware vSphere Troubleshooting cluster in console with pending or failed status Troubleshooting Klusterlet with degraded conditions Troubleshooting Object storage channel secret Namespace remains after deleting a cluster Auto-import-secret-exists error when importing a cluster Troubleshooting the cinder Container Storage Interface (CSI) driver for VolSync Troubleshooting cluster curator automatic template failure to deploy multicluster global hub To view the main documentation about the multicluster global hub, see multicluster global hub . Troubleshooting with the must-gather command Troubleshooting by accessing the PostgreSQL database Troubleshooting by using the database dump and restore Application management To view the main documentation about application management, see Managing applications . Troubleshooting application Kubernetes deployment version Troubleshooting local cluster not selected Governance Troubleshooting multiline YAML parsing To view the security guide, see Security overview . Console observability Console observability includes Search, along with header and navigation function. To view the observability guide, see Observability in the console . Troubleshooting grafana Troubleshooting observability Troubleshooting OpenShift monitoring services Troubleshooting metrics-collector Troubleshooting PostgreSQL shared memory error Troubleshooting a block error for Thanos compactor Submariner networking and service discovery This section lists the Submariner troubleshooting procedures that can occur when using Submariner with Red Hat Advanced Cluster Management or multicluster engine operator. For general Submariner troubleshooting information, see Troubleshooting in the Submariner documentation. To view the main documentation for the Submariner networking service and service discovery, see Submariner multicluster networking and service discovery . Troubleshooting Submariner not connecting after installation - general information Troubleshooting Submariner add-on status is degraded Troubleshooting Submariner end-to-end test failures 1.2. 
Running the must-gather command to troubleshoot To get started with troubleshooting, learn about the troubleshooting scenarios for users to run the must-gather command to debug the issues, then see the procedures to start using the command. Required access: Cluster administrator 1.2.1. Must-gather scenarios Scenario one: Use the Documented troubleshooting section to see if a solution to your problem is documented. The guide is organized by the major functions of the product. With this scenario, you check the guide to see if your solution is in the documentation. For instance, for trouble with creating a cluster, you might find a solution in the Manage cluster section. Scenario two: If your problem is not documented with steps to resolve, run the must-gather command and use the output to debug the issue. Scenario three: If you cannot debug the issue using your output from the must-gather command, then share your output with Red Hat Support. 1.2.2. Must-gather procedure See the following procedure to start using the must-gather command: Learn about the must-gather command and install the prerequisites that you need at Gathering data about your cluster in the Red Hat OpenShift Container Platform documentation. Log in to your cluster. Add the Red Hat Advanced Cluster Management for Kubernetes image that is used for gathering data and the directory. Run the following command, where you insert the image and the directory for the output: For the usual use-case, you should run the must-gather while you are logged into your hub cluster. Note: If you want to check your managed clusters, find the gather-managed.log file that is located in the cluster-scoped-resources directory: Check for managed clusters that are not set True for the JOINED and AVAILABLE column. You can run the must-gather command on those clusters that are not connected with True status. Go to your specified directory to see your output, which is organized in the following levels: Two peer levels: cluster-scoped-resources and namespace resources. Sub-level for each: API group for the custom resource definitions for both cluster-scope and namespace-scoped resources. level for each: YAML file sorted by kind . 1.2.3. Must-gather in a disconnected environment Complete the following steps to run the must-gather command in a disconnected environment: In a disconnected environment, mirror the Red Hat operator catalog images into their mirror registry. For more information, see Install in disconnected network environments . Run the following commands to collect all of the information, replacing <2.x> with the supported version for both <acm-must-gather> , for example 2.10 , and <multicluster-engine/must-gather> , for example 2.5 . If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot further, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your Red Hat credentials. 1.2.4. Must-gather for a hosted cluster If you experience issues with hosted control plane clusters, you can run the must-gather command to gather information to help you with troubleshooting. 1.2.4.1. About the must-gather command for hosted clusters The command generates output for the managed cluster and the hosted cluster. Data from the multicluster engine operator hub cluster: Cluster-scoped resources: These resources are node definitions of the management cluster. 
The hypershift-dump compressed file: This file is useful if you need to share the content with other people. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Network logs: These logs include the OVN northbound and southbound databases and the status for each one. Hosted clusters: This level of output involves all of the resources inside of the hosted cluster. Data from the hosted cluster: Cluster-scoped resources: These resources include all of the cluster-wide objects, such as nodes and CRDs. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Although the output does not contain any secret objects from the cluster, it can contain references to the names of the secrets. 1.2.4.2. Prerequisites To gather information by running the must-gather command, you must meet the following prerequisites: You must ensure that the kubeconfig file is loaded and is pointing to the multicluster engine operator hub cluster. You must have cluster-admin access to the multicluster engine operator hub cluster. You must have the name value for the HostedCluster resource and the namespace where the custom resource is deployed. 1.2.4.3. Entering the must-gather command for hosted clusters Enter the following command to collect information about the hosted cluster. In the command, the hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE parameter is optional; if you do not include it, the command runs as though the hosted cluster is in the default namespace, which is clusters . To save the results of the command to a compressed file, include the --dest-dir=NAME parameter, replacing NAME with the name of the directory where you want to save the results: 1.2.4.4. Entering the must-gather command in a disconnected environment Complete the following steps to run the must-gather command in a disconnected environment: In a disconnected environment, mirror the Red Hat operator catalog images into their mirror registry. For more information, see Install in disconnected network environments . Run the following command to extract logs, which reference the image from their mirror registry: 1.2.4.5. Additional resources For more information about troubleshooting hosted control planes, see Troubleshooting hosted control planes in the OpenShift Container Platform documentation. 1.3. Troubleshooting installation status stuck in installing or pending When installing Red Hat Advanced Cluster Management, the MultiClusterHub remains in Installing phase, or multiple pods maintain a Pending status. 1.3.1. Symptom: Stuck in Pending status More than ten minutes passed since you installed MultiClusterHub and one or more components from the status.components field of the MultiClusterHub resource report ProgressDeadlineExceeded . Resource constraints on the cluster might be the issue. Check the pods in the namespace where Multiclusterhub was installed. You might see Pending with a status similar to the following: In this case, the worker nodes resources are not sufficient in the cluster to run the product. 1.3.2. Resolving the problem: Adjust worker node sizing If you have this problem, then your cluster needs to be updated with either larger or more worker nodes. See Sizing your cluster for guidelines on sizing your cluster. 1.4. 
Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade After you upgrade from 2.7.x to 2.8.x and then to 2.9.0, the ocm-controller of the multicluster-engine namespace crashes. 1.4.1. Symptom: Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade After you attempt to list ManagedClusterSet and ManagedClusterSetBinding custom resource definitions, the following error message appears: Error from server: request to convert CR from an invalid group/version: cluster.open-cluster-management.io/v1beta1 The message indicates that the migration of ManagedClusterSets and ManagedClusterSetBindings custom resource definitions from v1beta1 to v1beta2 failed. 1.4.2. Resolving the problem: Troubleshooting ocm-controller errors after Red Hat Advanced Cluster Management upgrade To resolve this error, you must initiate the API migration manually. Complete the following steps: Revert the cluster-manager to a previous release. Pause the multiclusterengine with the following command: oc annotate mce multiclusterengine pause=true Run the following commands to replace the image of the cluster-manager deployment with a previous version: oc patch deployment cluster-manager -n multicluster-engine -p \ '{"spec":{"template":{"spec":{"containers":[{"name":"registration-operator","image":"registry.redhat.io/multicluster-engine/registration-operator-rhel8@sha256:35999c3a1022d908b6fe30aa9b85878e666392dbbd685e9f3edcb83e3336d19f"}]}}}}' export ORIGIN_REGISTRATION_IMAGE=USD(oc get clustermanager cluster-manager -o jsonpath='{.spec.registrationImagePullSpec}') Replace the registration image reference in the ClusterManager resource with a previous version. Run the following command: oc patch clustermanager cluster-manager --type='json' -p='[{"op": "replace", "path": "/spec/registrationImagePullSpec", "value": "registry.redhat.io/multicluster-engine/registration-rhel8@sha256:a3c22aa4326859d75986bf24322068f0aff2103cccc06e1001faaf79b9390515"}]' Run the following commands to revert the ManagedClusterSets and ManagedClusterSetBindings custom resource definitions to a previous release: oc annotate crds managedclustersets.cluster.open-cluster-management.io operator.open-cluster-management.io/version- oc annotate crds managedclustersetbindings.cluster.open-cluster-management.io operator.open-cluster-management.io/version- Restart the cluster-manager and wait for the custom resource definitions to be recreated.
Run the following commands: oc -n multicluster-engine delete pods -l app=cluster-manager oc wait crds managedclustersets.cluster.open-cluster-management.io --for=jsonpath="{.metadata.annotations['operator\.open-cluster-management\.io/version']}"="2.3.3" --timeout=120s oc wait crds managedclustersetbindings.cluster.open-cluster-management.io --for=jsonpath="{.metadata.annotations['operator\.open-cluster-management\.io/version']}"="2.3.3" --timeout=120s Start the storage version migration with the following commands: oc patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' -p='[{"op":"replace", "path":"/spec/resource/version", "value":"v1beta1"}]' oc patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' --subresource status -p='[{"op":"remove", "path":"/status/conditions"}]' oc patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' -p='[{"op":"replace", "path":"/spec/resource/version", "value":"v1beta1"}]' oc patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' --subresource status -p='[{"op":"remove", "path":"/status/conditions"}]' Run the following command to wait for the migration to complete: oc wait storageversionmigration managedclustersets.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s oc wait storageversionmigration managedclustersetbindings.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s Restore the cluster-manager back to Red Hat Advanced Cluster Management 2.12. It might take several minutes. Run the following command: oc annotate mce multiclusterengine pause- oc patch clustermanager cluster-manager --type='json' -p='[{"op": "replace", "path": "/spec/registrationImagePullSpec", "value": "'USDORIGIN_REGISTRATION_IMAGE'"}]' 1.4.2.1. Verification To verify that Red Hat Advanced Cluster Management is recovered run the following commands: oc get managedclusterset oc get managedclustersetbinding -A After running the commands, the ManagedClusterSets and ManagedClusterSetBindings resources are listed without error messages. 1.5. Troubleshooting an offline cluster There are a few common causes for a cluster showing an offline status. 1.5.1. Symptom: Cluster status is offline After you complete the procedure for creating a cluster, you cannot access it from the Red Hat Advanced Cluster Management console, and it shows a status of offline . 1.5.2. Resolving the problem: Cluster status is offline Determine if the managed cluster is available. You can check this in the Clusters area of the Red Hat Advanced Cluster Management console. If it is not available, try restarting the managed cluster. If the managed cluster status is still offline, complete the following steps: Run the oc get managedcluster <cluster_name> -o yaml command on the hub cluster. Replace <cluster_name> with the name of your cluster. Find the status.conditions section. Check the messages for type: ManagedClusterConditionAvailable and resolve any problems. 1.6. Troubleshooting a managed cluster import failure If your cluster import fails, there are a few steps that you can take to determine why the cluster import failed. 1.6.1. Symptom: Imported cluster not available After you complete the procedure for importing a cluster, you cannot access it from the Red Hat Advanced Cluster Management for Kubernetes console. 1.6.2. 
Resolving the problem: Imported cluster not available There can be a few reasons why an imported cluster is not available after an attempt to import it. If the cluster import fails, complete the following steps, until you find the reason for the failed import: On the Red Hat Advanced Cluster Management hub cluster, run the following command to ensure that the Red Hat Advanced Cluster Management import controller is running. You should see two pods that are running. If either of the pods is not running, run the following command to view the log to determine the reason: On the Red Hat Advanced Cluster Management hub cluster, run the following command to determine if the managed cluster import secret was generated successfully by the Red Hat Advanced Cluster Management import controller: If the import secret does not exist, run the following command to view the log entries for the import controller and determine why it was not created: On the Red Hat Advanced Cluster Management hub cluster, if your managed cluster is local-cluster , provisioned by Hive, or has an auto-import secret, run the following command to check the import status of the managed cluster. If the condition ManagedClusterImportSucceeded is not true , the result of the command indicates the reason for the failure. Check the Klusterlet status of the managed cluster for a degraded condition. See Troubleshooting Klusterlet with degraded conditions to find the reason that the Klusterlet is degraded. 1.7. Troubleshooting cluster with pending import status If you receive Pending import continually on the console of your cluster, follow the procedure to troubleshoot the problem. 1.7.1. Symptom: Cluster with pending import status After importing a cluster by using the Red Hat Advanced Cluster Management console, the cluster appears in the console with a status of Pending import . 1.7.2. Identifying the problem: Cluster with pending import status Run the following command on the managed cluster to view the Kubernetes pod names that are having the issue: Run the following command on the managed cluster to find the log entry for the error: Replace registration_agent_pod with the pod name that you identified in step 1. Search the returned results for text that indicates there was a networking connectivity problem. Example includes: no such host . 1.7.3. Resolving the problem: Cluster with pending import status Retrieve the port number that is having the problem by entering the following command on the hub cluster: Ensure that the hostname from the managed cluster can be resolved, and that outbound connectivity to the host and port is occurring. If the communication cannot be established by the managed cluster, the cluster import is not complete. The cluster status for the managed cluster is Pending import . 1.8. Troubleshooting cluster with already exists error If you are unable to import an OpenShift Container Platform cluster into Red Hat Advanced Cluster Management MultiClusterHub and receive an AlreadyExists error, follow the procedure to troubleshoot the problem. 1.8.1. Symptom: Already exists error log when importing OpenShift Container Platform cluster An error log shows up when importing an OpenShift Container Platform cluster into Red Hat Advanced Cluster Management MultiClusterHub : 1.8.2. 
Identifying the problem: Already exists when importing OpenShift Container Platform cluster Check if there are any Red Hat Advanced Cluster Management-related resources on the cluster that you want to import to new the Red Hat Advanced Cluster Management MultiClusterHub by running the following commands: 1.8.3. Resolving the problem: Already exists when importing OpenShift Container Platform cluster Remove the klusterlet custom resource by using the following command: oc get klusterlet | grep klusterlet | awk '{print USD1}' | xargs oc patch klusterlet --type=merge -p '{"metadata":{"finalizers": []}}' Run the following commands to remove pre-existing resources: 1.9. Troubleshooting cluster creation on VMware vSphere If you experience a problem when creating a Red Hat OpenShift Container Platform cluster on VMware vSphere, see the following troubleshooting information to see if one of them addresses your problem. Note: Sometimes when the cluster creation process fails on VMware vSphere, the link is not enabled for you to view the logs. If this happens, you can identify the problem by viewing the log of the hive-controllers pod. The hive-controllers log is in the hive namespace. 1.9.1. Managed cluster creation fails with certificate IP SAN error 1.9.1.1. Symptom: Managed cluster creation fails with certificate IP SAN error After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails with an error message that indicates a certificate IP SAN error. 1.9.1.2. Identifying the problem: Managed cluster creation fails with certificate IP SAN error The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.9.1.3. Resolving the problem: Managed cluster creation fails with certificate IP SAN error Use the VMware vCenter server fully-qualified host name instead of the IP address in the credential. You can also update the VMware vCenter CA certificate to contain the IP SAN. 1.9.2. Managed cluster creation fails with unknown certificate authority 1.9.2.1. Symptom: Managed cluster creation fails with unknown certificate authority After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because the certificate is signed by an unknown authority. 1.9.2.2. Identifying the problem: Managed cluster creation fails with unknown certificate authority The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.9.2.3. Resolving the problem: Managed cluster creation fails with unknown certificate authority Ensure you entered the correct certificate from the certificate authority when creating the credential. 1.9.3. Managed cluster creation fails with expired certificate 1.9.3.1. Symptom: Managed cluster creation fails with expired certificate After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because the certificate is expired or is not yet valid. 1.9.3.2. Identifying the problem: Managed cluster creation fails with expired certificate The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.9.3.3. Resolving the problem: Managed cluster creation fails with expired certificate Ensure that the time on your ESXi hosts is synchronized. 1.9.4. Managed cluster creation fails with insufficient privilege for tagging 1.9.4.1. 
Symptom: Managed cluster creation fails with insufficient privilege for tagging After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is insufficient privilege to use tagging. 1.9.4.2. Identifying the problem: Managed cluster creation fails with insufficient privilege for tagging The deployment of the managed cluster fails and returns the following errors in the deployment log: 1.9.4.3. Resolving the problem: Managed cluster creation fails with insufficient privilege for tagging Ensure that your VMware vCenter required account privileges are correct. See Image registry removed during information for more information. 1.9.5. Managed cluster creation fails with invalid dnsVIP 1.9.5.1. Symptom: Managed cluster creation fails with invalid dnsVIP After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an invalid dnsVIP. 1.9.5.2. Identifying the problem: Managed cluster creation fails with invalid dnsVIP If you see the following message when trying to deploy a new managed cluster with VMware vSphere, it is because you have an older OpenShift Container Platform release image that does not support VMware Installer Provisioned Infrastructure (IPI): 1.9.5.3. Resolving the problem: Managed cluster creation fails with invalid dnsVIP Select a release image from a later version of OpenShift Container Platform that supports VMware Installer Provisioned Infrastructure. 1.9.6. Managed cluster creation fails with incorrect network type 1.9.6.1. Symptom: Managed cluster creation fails with incorrect network type After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an incorrect network type specified. 1.9.6.2. Identifying the problem: Managed cluster creation fails with incorrect network type If you see the following message when trying to deploy a new managed cluster with VMware vSphere, it is because you have an older OpenShift Container Platform image that does not support VMware Installer Provisioned Infrastructure (IPI): 1.9.6.3. Resolving the problem: Managed cluster creation fails with incorrect network type Select a valid VMware vSphere network type for the specified VMware cluster. 1.9.7. Managed cluster creation fails with an error processing disk changes 1.9.7.1. Symptom: Adding the VMware vSphere managed cluster fails due to an error processing disk changes After creating a new Red Hat OpenShift Container Platform cluster on VMware vSphere, the cluster fails because there is an error when processing disk changes. 1.9.7.2. Identifying the problem: Adding the VMware vSphere managed cluster fails due to an error processing disk changes A message similar to the following is displayed in the logs: 1.9.7.3. Resolving the problem: Adding the VMware vSphere managed cluster fails due to an error processing disk changes Use the VMware vSphere client to give the user All privileges for Profile-driven Storage Privileges . 1.10. Troubleshooting managed cluster creation fails on Red Hat OpenStack Platform with unknown authority error If you experience a problem when creating a Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform, see the following troubleshooting information to see if one of them addresses your problem. 1.10.1. 
Symptom: Managed cluster creation fails with unknown authority error After creating a new Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform using self-signed certificates, the cluster fails with an error message that indicates an unknown authority error. 1.10.2. Identifying the problem: Managed cluster creation fails with unknown authority error The deployment of the managed cluster fails and returns the following error message: x509: certificate signed by unknown authority 1.10.3. Resolving the problem: Managed cluster creation fails with unknown authority error Verify that the following files are configured correctly: The clouds.yaml file must specify the path to the ca.crt file in the cacert parameter. The cacert parameter is passed to the OpenShift installer when generating the ignition shim. See the following example: clouds: openstack: cacert: "/etc/pki/ca-trust/source/anchors/ca.crt" The certificatesSecretRef paremeter must reference a secret with a file name matching the ca.crt file. See the following example: spec: baseDomain: dev09.red-chesterfield.com clusterName: txue-osspoke platform: openstack: cloud: openstack credentialsSecretRef: name: txue-osspoke-openstack-creds certificatesSecretRef: name: txue-osspoke-openstack-certificatebundle To create a secret with a matching file name, run the following command: The size of the ca.cert file must be less than 63 thousand bytes. 1.11. Troubleshooting imported clusters offline after certificate change Installing a custom apiserver certificate is supported, but one or more clusters that were imported before you changed the certificate information are in offline status. 1.11.1. Symptom: Clusters offline after certificate change After you complete the procedure for updating a certificate secret, one or more of your clusters that were online now display offline status in the console. 1.11.2. Identifying the problem: Clusters offline after certificate change After updating the information for a custom API server certificate, clusters that were imported and running before the new certificate are now in an offline state. The errors that indicate that the certificate is the problem are found in the logs for the pods in the open-cluster-management-agent namespace of the offline managed cluster. The following examples are similar to the errors that are displayed in the logs: See the following work-agent log: See the following registration-agent log: 1.11.3. Resolving the problem: Clusters offline after certificate change If your managed cluster is the local-cluster , or your managed cluster was created by using Red Hat Advanced Cluster Management for Kubernetes, you must wait 10 minutes or longer to reimport your managed cluster. To reimport your managed cluster immediately, you can delete your managed cluster import secret on the hub cluster and reimport it by using Red Hat Advanced Cluster Management. Run the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. If you want to reimport a managed cluster that was imported by using Red Hat Advanced Cluster Management, complete the following steps to import the managed cluster again: On the hub cluster, recreate the managed cluster import secret by running the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. 
On the hub cluster, expose the managed cluster import secret to a YAML file by running the following command: Replace <cluster_name> with the name of the managed cluster that you want to import. On the managed cluster, apply the import.yaml file by running the following command: Note: The steps do not detach the managed cluster from the hub cluster. The steps update the required manifests with current settings on the managed cluster, including the new certificate information. 1.12. Namespace remains after deleting a cluster When you remove a managed cluster, the namespace is normally removed as part of the cluster removal process. In rare cases, the namespace remains with some artifacts in it. In that case, you must manually remove the namespace. 1.12.1. Symptom: Namespace remains after deleting a cluster After removing a managed cluster, the namespace is not removed. 1.12.2. Resolving the problem: Namespace remains after deleting a cluster Complete the following steps to remove the namespace manually: Run the following command to produce a list of the resources that remain in the <cluster_name> namespace: Replace cluster_name with the name of the namespace for the cluster that you attempted to remove. Delete each identified resource on the list that does not have a status of Delete by entering the following command to edit the list: Replace resource_kind with the kind of the resource. Replace resource_name with the name of the resource. Replace namespace with the name of the namespace of the resource. Locate the finalizer attribute in the in the metadata. Delete the non-Kubernetes finalizers by using the vi editor dd command. Save the list and exit the vi editor by entering the :wq command. Delete the namespace by entering the following command: Replace cluster-name with the name of the namespace that you are trying to delete. 1.13. Auto-import-secret-exists error when importing a cluster Your cluster import fails with an error message that reads: auto import secret exists. 1.13.1. Symptom: Auto import secret exists error when importing a cluster When importing a hive cluster for management, an auto-import-secret already exists error is displayed. 1.13.2. Resolving the problem: Auto-import-secret-exists error when importing a cluster This problem occurs when you attempt to import a cluster that was previously managed by Red Hat Advanced Cluster Management. When this happens, the secrets conflict when you try to reimport the cluster. To work around this problem, complete the following steps: To manually delete the existing auto-import-secret , run the following command on the hub cluster: Replace cluster-namespace with the namespace of your cluster. Import your cluster again by using the procedure in Cluster import introduction . 1.14. Troubleshooting the cinder Container Storage Interface (CSI) driver for VolSync If you use VolSync or use a default setting in a cinder Container Storage Interface (CSI) driver, you might encounter errors for the PVC that is in use. 1.14.1. Symptom: Volumesnapshot error state You can configure a VolSync ReplicationSource or ReplicationDestination to use snapshots. Also, you can configure the storageclass and volumesnapshotclass in the ReplicationSource and ReplicationDestination . There is a parameter on the cinder volumesnapshotclass called force-create with a default value of false . This force-create parameter on the volumesnapshotclass means cinder does not allow the volumesnapshot to be taken of a PVC in use. 
As a result, the volumesnapshot is in an error state. 1.14.2. Resolving the problem: Setting the parameter to true Create a new volumesnapshotclass for the cinder CSI driver. Change the parameter, force-create , to true . See the following sample YAML: apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Delete driver: cinder.csi.openstack.org kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: 'true' name: standard-csi parameters: force-create: 'true' 1.15. Troubleshooting with the must-gather command 1.15.1. Symptom: Errors with multicluster global hub You might experience various errors with multicluster global hub. You can run the must-gather command for troubleshooting issues with multicluster global hub. 1.15.2. Resolving the problem: Running the must-gather command for debugging Run the must-gather command to gather details and logs, and to take steps in debugging issues. This debugging information is also useful when you open a support request. The oc adm must-gather CLI command collects the information from your cluster that is often needed for debugging issues, including: Resource definitions Service logs 1.15.2.1. Prerequisites You must meet the following prerequisites to run the must-gather command: Access to the global hub and managed hub clusters as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. 1.15.2.2. Running the must-gather command Complete the following procedure to collect information by using the must-gather command: Learn about the must-gather command and install the prerequisites that you need by reading the Gathering data about your cluster in the OpenShift Container Platform documentation. Log in to your global hub cluster. For the typical use case, run the following command while you are logged into your global hub cluster: If you want to check your managed hub clusters, run the must-gather command on those clusters. Optional: If you want to save the results in the SOMENAME directory, you can run the following command instead of the one in the previous step: You can specify a different name for the directory. Note: The command includes the required additions to create a gzipped tarball file. The following information is collected from the must-gather command: Two peer levels: cluster-scoped-resources and namespaces resources. Sub-level for each: API group for the custom resource definitions for both cluster-scope and namespace-scoped resources. Next level for each: YAML file sorted by kind. For the global hub cluster, you can check the PostgresCluster and Kafka in the namespaces resources. For the global hub cluster, you can check the multicluster global hub related pods and logs in pods of namespaces resources. For the managed hub cluster, you can check the multicluster global hub agent pods and logs in pods of namespaces resources. 1.16. Troubleshooting by accessing the PostgreSQL database 1.16.1. Symptom: Errors with multicluster global hub You might experience various errors with multicluster global hub. You can access the provisioned PostgreSQL database to view messages that might be helpful for troubleshooting issues with multicluster global hub. 1.16.2.
Resolving the problem: Accessing the PostgreSQL database Using the ClusterIP service LoadBalancer Expose the service type to LoadBalancer provisioned by default: Run the following command to get your credentials: Expose the service type to LoadBalancer provisioned by crunchy operator: Run the following command to get your credentials: 1.17. Troubleshooting by using the database dump and restore In a production environment, back up your PostgreSQL database regularly as a database management task. The backup can also be used for debugging the multicluster global hub. 1.17.1. Symptom: Errors with multicluster global hub You might experience various errors with multicluster global hub. You can use the database dump and restore for troubleshooting issues with multicluster global hub. 1.17.2. Resolving the problem: Dumping the output of the database for debugging Sometimes you need to dump the output of the multicluster global hub database to debug a problem. The PostgreSQL database provides the pg_dump command line tool to dump the contents of the database. To dump data from the localhost database server, run the following command: To dump the multicluster global hub database located on a remote server in a compressed format, use the command-line options to control the connection details, as shown in the following example: 1.17.3. Resolving the problem: Restore database from dump To restore a PostgreSQL database, you can use the psql or pg_restore command line tools. The psql tool is used to restore plain text files created by pg_dump : The pg_restore tool is used to restore a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats (custom, tar, or directory): 1.18. Troubleshooting cluster status changing from offline to available The status of the managed cluster alternates between offline and available without any manual change to the environment or cluster. 1.18.1. Symptom: Cluster status changing from offline to available When the network that connects the managed cluster to the hub cluster is unstable, the status of the managed cluster that is reported by the hub cluster cycles between offline and available . The connection between the hub cluster and managed cluster is maintained through a lease that is validated at the leaseDurationSeconds interval value. If the lease is not validated within five consecutive attempts of the leaseDurationSeconds value, then the cluster is marked offline . For example, the cluster is marked offline after five minutes with a leaseDurationSeconds interval of 60 seconds . This configuration can be inadequate for reasons such as connectivity issues or latency, causing instability. 1.18.2. Resolving the problem: Cluster status changing from offline to available The number of validation attempts is set to five by default and cannot be changed, but you can change the leaseDurationSeconds interval. Determine the amount of time, in minutes, that you want the cluster to be marked as offline , then multiply that value by 60 to convert to seconds. Then divide by the default number of attempts, which is five. The result is your leaseDurationSeconds value.
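For example (an illustrative calculation that is not part of the original procedure): if you want the cluster to be marked offline after 20 minutes without a validated lease, convert 20 minutes to 1200 seconds and divide by the five validation attempts, which gives 1200 / 5 = 240, so you would set leaseDurationSeconds to 240.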
Edit your ManagedCluster specification on the hub cluster by entering the following command, but replace cluster-name with the name of your managed cluster: Increase the value of leaseDurationSeconds in your ManagedCluster specification, as seen in the following sample YAML: apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60 Save and apply the file. 1.19. Troubleshooting cluster in console with pending or failed status If you observe Pending status or Failed status in the console for a cluster you created, follow the procedure to troubleshoot the problem. 1.19.1. Symptom: Cluster in console with pending or failed status After creating a new cluster by using the Red Hat Advanced Cluster Management for Kubernetes console, the cluster does not progress beyond the status of Pending or displays Failed status. 1.19.2. Identifying the problem: Cluster in console with pending or failed status If the cluster displays Failed status, navigate to the details page for the cluster and follow the link to the logs provided. If no logs are found or the cluster displays Pending status, continue with the following procedure to check for logs: Procedure 1 Run the following command on the hub cluster to view the names of the Kubernetes pods that were created in the namespace for the new cluster: Replace new_cluster_name with the name of the cluster that you created. If no pod that contains the string provision in the name is listed, continue with Procedure 2. If there is a pod with provision in the title, run the following command on the hub cluster to view the logs of that pod: Replace new_cluster_name_provision_pod_name with the name of the cluster that you created, followed by the pod name that contains provision . Search for errors in the logs that might explain the cause of the problem. Procedure 2 If there is not a pod with provision in its name, the problem occurred earlier in the process. Complete the following procedure to view the logs: Run the following command on the hub cluster: Replace new_cluster_name with the name of the cluster that you created. For more information about cluster installation logs, see Gathering installation logs in the Red Hat OpenShift documentation. See if there is additional information about the problem in the Status.Conditions.Message and Status.Conditions.Reason entries of the resource. 1.19.3. Resolving the problem: Cluster in console with pending or failed status After you identify the errors in the logs, determine how to resolve the errors before you destroy the cluster and create it again. The following example provides a possible log error of selecting an unsupported zone, and the actions that are required to resolve it: When you created your cluster, you selected one or more zones within a region that are not supported. Complete one of the following actions when you recreate your cluster to resolve the issue: Select a different zone within the region. Omit the zone that does not provide the support, if you have other zones listed. Select a different region for your cluster. After determining the issues from the log, destroy the cluster and recreate it. See Creating clusters for more information about creating a cluster. 1.20. Troubleshooting Grafana When you query some time-consuming metrics in the Grafana explorer, you might encounter a Gateway Time-out error. 1.20.1. 
Symptom: Grafana explorer gateway timeout If you hit the Gateway Time-out error when you query some time-consuming metrics in the Grafana explorer, it is possible that the timeout is caused by the Grafana in the open-cluster-management-observability namespace. 1.20.2. Resolving the problem: Configure the Grafana If you have this problem, complete the following steps: Verify that the default configuration of Grafana has expected timeout settings: To verify that the default timeout setting of Grafana, run the following command: The following timeout settings should be displayed: To verify the default data source query timeout for Grafana, run the following command: The following timeout settings should be displayed: If you verified the default configuration of Grafana has expected timeout settings, then you can configure the Grafana in the open-cluster-management-observability namespace by running the following command: Refresh the Grafana page and try to query the metrics again. The Gateway Time-out error is no longer displayed. 1.21. Troubleshooting local cluster not selected with placement rule The managed clusters are selected with a placement rule, but the local-cluster , which is a hub cluster that is also managed, is not selected. The placement rule user is not granted permission to get the managedcluster resources in the local-cluster namespace. 1.21.1. Symptom: Troubleshooting local cluster not selected as a managed cluster All managed clusters are selected with a placement rule, but the local-cluster is not. The placement rule user is not granted permission to get the managedcluster resources in the local-cluster namespace. 1.21.2. Resolving the problem: Troubleshooting local cluster not selected as a managed cluster Deprecated: PlacementRule To resolve this issue, you need to grant the managedcluster administrative permission in the local-cluster namespace. Complete the following steps: Confirm that the list of managed clusters does include local-cluster , and that the placement rule decisions list does not display the local-cluster . Run the following command and view the results: See in the sample output that local-cluster is joined, but it is not in the YAML for PlacementRule : apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-ready-clusters namespace: default spec: clusterSelector: {} status: decisions: - clusterName: cluster1 clusterNamespace: cluster1 Create a Role in your YAML file to grant the managedcluster administrative permission in the local-cluster namespace. See the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: managedcluster-admin-user-zisis namespace: local-cluster rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters verbs: - get Create a RoleBinding resource to grant the placement rule user access to the local-cluster namespace. See the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: managedcluster-admin-user-zisis namespace: local-cluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: managedcluster-admin-user-zisis namespace: local-cluster subjects: - kind: User name: zisis apiGroup: rbac.authorization.k8s.io 1.22. Troubleshooting application Kubernetes deployment version A managed cluster with a deprecated Kubernetes apiVersion might not be supported. See the Kubernetes issue for more details about the deprecated API version. 1.22.1. 
Symptom: Application deployment version If one or more of your application resources in the Subscription YAML file uses the deprecated API, you might receive an error similar to the following error: Or, with a new Kubernetes API version in your YAML file named old.yaml for instance, you might receive the following error: 1.22.2. Resolving the problem: Application deployment version Update the apiVersion in the resource. For example, if the error displays for Deployment kind in the subscription YAML file, you need to update the apiVersion from extensions/v1beta1 to apps/v1 . See the following example: apiVersion: apps/v1 kind: Deployment Verify the available versions by running the following command on the managed cluster: Check for VERSION . 1.23. Troubleshooting Klusterlet with degraded conditions The Klusterlet degraded conditions can help to diagnose the status of Klusterlet agents on the managed cluster. If a Klusterlet is in the degraded condition, the Klusterlet agents on the managed cluster might have errors that you need to troubleshoot. See the following information for Klusterlet degraded conditions that are set to True . 1.23.1. Symptom: Klusterlet is in the degraded condition After deploying a Klusterlet on the managed cluster, the KlusterletRegistrationDegraded or KlusterletWorkDegraded condition displays a status of True . 1.23.2. Identifying the problem: Klusterlet is in the degraded condition Run the following command on the managed cluster to view the Klusterlet status: Check KlusterletRegistrationDegraded or KlusterletWorkDegraded to see if the condition is set to True . Proceed to Resolving the problem for any degraded conditions that are listed. 1.23.3. Resolving the problem: Klusterlet is in the degraded condition See the following list of degraded statuses and how you can attempt to resolve those issues: If the KlusterletRegistrationDegraded condition displays a status of True and the condition reason is: BootStrapSecretMissing , you need to create a bootstrap secret in the open-cluster-management-agent namespace. If the KlusterletRegistrationDegraded condition displays True and the condition reason is BootstrapSecretError or BootstrapSecretUnauthorized , then the current bootstrap secret is invalid. Delete the current bootstrap secret and recreate a valid bootstrap secret in the open-cluster-management-agent namespace. If the KlusterletRegistrationDegraded and KlusterletWorkDegraded conditions display True and the condition reason is HubKubeConfigSecretMissing , delete the Klusterlet and recreate it. If the KlusterletRegistrationDegraded and KlusterletWorkDegraded conditions display True and the condition reason is: ClusterNameMissing , KubeConfigMissing , HubConfigSecretError , or HubConfigSecretUnauthorized , delete the hub cluster kubeconfig secret from the open-cluster-management-agent namespace. The registration agent will bootstrap again to get a new hub cluster kubeconfig secret. If the KlusterletRegistrationDegraded condition displays True and the condition reason is GetRegistrationDeploymentFailed or UnavailableRegistrationPod , you can check the condition message to get the problem details and attempt to resolve the problem. If the KlusterletWorkDegraded condition displays True and the condition reason is GetWorkDeploymentFailed or UnavailableWorkPod , you can check the condition message to get the problem details and attempt to resolve the problem. 1.24.
Troubleshooting Object storage channel secret If you change the SecretAccessKey , the subscription of an Object storage channel cannot pick up the updated secret automatically and you receive an error. 1.24.1. Symptom: Object storage channel secret The subscription of an Object storage channel cannot pick up the updated secret automatically. This prevents the subscription operator from reconciling and deploying resources from Object storage to the managed cluster. 1.24.2. Resolving the problem: Object storage channel secret You need to manually input the credentials to create a secret, then refer to the secret within a channel. Annotate the subscription CR in order to generate a reconcile signal to the subscription operator. See the following data specification: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: deva namespace: ch-obj labels: name: obj-sub spec: type: ObjectBucket pathname: http://ec2-100-26-232-156.compute-1.amazonaws.com:9000/deva sourceNamespaces: - default secretRef: name: dev --- apiVersion: v1 kind: Secret metadata: name: dev namespace: ch-obj labels: name: obj-sub data: AccessKeyID: YWRtaW4= SecretAccessKey: cGFzc3dvcmRhZG1pbg== Run oc annotate to test: After you run the command, you can go to the Application console to verify that the resource is deployed to the managed cluster. Or you can log in to the managed cluster to see if the application resource is created in the given namespace. 1.25. Troubleshooting observability After you install the observability component, the component might be stuck and an Installing status is displayed. 1.25.1. Symptom: MultiClusterObservability resource status stuck If the observability status is stuck in an Installing status after you install and create the Observability custom resource definition (CRD), it is possible that there is no value defined for the spec:storageConfig:storageClass parameter. Alternatively, the observability component automatically finds the default storageClass , but if there is no value for the storage, the component remains stuck with the Installing status. 1.25.2. Resolving the problem: MultiClusterObservability resource status stuck If you have this problem, complete the following steps: Verify that the observability components are installed: To verify that the multicluster-observability-operator pod is running, run the following command: To verify that the appropriate CRDs are present, run the following command: The following CRDs must be displayed before you enable the component: If you create your own storageClass for a Bare Metal cluster, see Persistent storage using NFS . To ensure that the observability component can find the default storageClass, update the storageClass parameter in the multicluster-observability-operator custom resource definition. Your parameter might resemble the following value: The observability component status is updated to a Ready status when the installation is complete. If the installation fails to complete, the Fail status is displayed. 1.26. Troubleshooting OpenShift monitoring service The observability service in a managed cluster needs to scrape metrics from the OpenShift Container Platform monitoring stack. The metrics-collector is not installed if the OpenShift Container Platform monitoring stack is not ready. 1.26.1. Symptom: OpenShift monitoring service is not ready The endpoint-observability-operator-x pod checks if the prometheus-k8s service is available in the openshift-monitoring namespace.
If the service is not present in the openshift-monitoring namespace, then the metrics-collector is not deployed. You might receive the following error message: Failed to get prometheus resource . 1.26.2. Resolving the problem: OpenShift monitoring service is not ready If you have this problem, complete the following steps: Log in to your OpenShift Container Platform cluster. Access the openshift-monitoring namespace to verify that the prometheus-k8s service is available. Restart endpoint-observability-operator-x pod in the open-cluster-management-addon-observability namespace of the managed cluster. 1.27. Troubleshooting metrics-collector When the observability-client-ca-certificate secret is not refreshed in the managed cluster, you might receive an internal server error. 1.27.1. Symptom: metrics-collector cannot verify observability-client-ca-certificate There might be a managed cluster, where the metrics are unavailable. If this is the case, you might receive the following error from the metrics-collector deployment: 1.27.2. Resolving the problem: metrics-collector cannot verify observability-client-ca-certificate If you have this problem, complete the following steps: Log in to your managed cluster. Delete the secret named, observability-controller-open-cluster-management.io-observability-signer-client-cert that is in the open-cluster-management-addon-observability namespace. Run the following command: Note: The observability-controller-open-cluster-management.io-observability-signer-client-cert is automatically recreated with new certificates. The metrics-collector deployment is recreated and the observability-controller-open-cluster-management.io-observability-signer-client-cert secret is updated. 1.28. Troubleshooting PostgreSQL shared memory error If you have a large environment, you might encounter a PostgreSQL shared memory error that impacts search results and the topology view for applications. 1.28.1. Symptom: PostgreSQL shared memory error An error message resembling the following appears in the search-api logs: ERROR: could not resize shared memory segment "/PostgreSQL.1083654800" to 25031264 bytes: No space left on device (SQLSTATE 53100) 1.28.2. Resolving the problem: PostgreSQL shared memory error To resolve the issue, update the PostgreSQL resources found in the search-postgres ConfigMap. Complete the following steps to update the resources: Run the following command to switch to the open-cluster-management project: oc project open-cluster-management Increase the search-postgres pod memory. The following command increases the memory to 16Gi : oc patch search -n open-cluster-management search-v2-operator --type json -p '[{"op": "add", "path": "/spec/deployments/database/resources", "value": {"limits": {"memory": "16Gi"}, "requests": {"memory": "32Mi", "cpu": "25m"}}}]' Run the following command to prevent the search operator from overwriting your changes: oc annotate search search-v2-operator search-pause=true Run the following command to update the resources in the search-postgres YAML file: oc edit cm search-postgres -n open-cluster-management See the following example for increasing resources: postgresql.conf: |- work_mem = '128MB' # Higher values allocate more memory max_parallel_workers_per_gather = '0' # Disables parallel queries shared_buffers = '1GB' # Higher values allocate more memory Make sure to save your changes before exiting. Run the following command to restart the postgres and api pod. 
oc delete pod search-postgres-xyz search-api-xzy To verify your changes, open the search-postgres YAML file and confirm that the changes you made to postgresql.conf: are present by running the following command: oc get cm search-postgres -n open-cluster-management -o yaml See Search customization and configurations for more information on adding environment variables. 1.29. Troubleshooting Thanos compactor halts You might receive an error message that the compactor is halted. This can occur when there are corrupted blocks or when there is insufficient space on the Thanos compactor persistent volume claim (PVC). 1.29.1. Symptom: Thanos compactor halts The Thanos compactor halts because there is no space left on your persistent volume claim (PVC). You receive the following message: ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@5827190780573537664: compact blocks [ /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE]: 2 errors: populate block: add series: write series data: write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device; write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device" 1.29.2. Resolving the problem: Thanos compactor halts To resolve the problem, increase the storage space of the Thanos compactor PVC. Complete the following steps: Increase the storage space for the data-observability-thanos-compact-0 PVC. See Increasing and decreasing persistent volumes and persistent volume claims for more information. Restart the observability-thanos-compact pod by deleting the pod. The new pod is automatically created and started. oc delete pod observability-thanos-compact-0 -n open-cluster-management-observability After you restart the observability-thanos-compact pod, check the acm_thanos_compact_todo_compactions metric. As the Thanos compactor works through the backlog, the metric value decreases. Confirm that the metric changes in a consistent cycle and check the disk usage. Then you can reattempt to decrease the PVC again. Note: This might take several weeks. 1.29.3. Symptom: Thanos compactor halts The Thanos compactor halts because you have corrupted blocks. You might receive the following output where the 01HKZYEZ2DVDQXF1STVEXAMPLE block is corrupted: ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@15699422364132557315: compact blocks [/var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZQK7TD06J2XWGR5EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZYEZ2DVDQXF1STVEXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HM05APAHXBQSNC0N5EXAMPLE]: populate block: chunk iter: cannot populate chunk 8 from block 01HKZYEZ2DVDQXF1STVEXAMPLE: segment index 0 out of range" 1.29.4. Resolving the problem: Thanos compactor halts Add the thanos bucket verify command to the object storage configuration. Complete the following steps: Resolve the block error by adding the thanos bucket verify command to the object storage configuration. Set the configuration in the observability-thanos-compact pod by using the following commands: oc rsh observability-thanos-compact-0 [..] 
thanos tools bucket verify -r --objstore.config="USDOBJSTORE_CONFIG" --objstore-backup.config="USDOBJSTORE_CONFIG" --id=01HKZYEZ2DVDQXF1STVEXAMPLE If the command does not work, you must mark the block for deletion because it might be corrupted. Run the following commands: thanos tools bucket mark --id "01HKZYEZ2DVDQXF1STVEXAMPLE" --objstore.config="USDOBJSTORE_CONFIG" --marker=deletion-mark.json --details=DELETE If you are blocked for deletion, clean up the marked blocks by running the following command: thanos tools bucket cleanup --objstore.config="USDOBJSTORE_CONFIG" 1.30. Troubleshooting Submariner not connecting after installation If Submariner does not run correctly after you configure it, complete the following steps to diagnose the issue. 1.30.1. Symptom: Submariner not connecting after installation Your Submariner network is not communicating after installation. 1.30.2. Identifying the problem: Submariner not connecting after installation If the network connectivity is not established after deploying Submariner, begin the troubleshooting steps. Note that it might take several minutes for the processes to complete when you deploy Submariner. 1.30.3. Resolving the problem: Submariner not connecting after installation When Submariner does not run correctly after deployment, complete the following steps: Check for the following requirements to determine whether the components of Submariner deployed correctly: The submariner-addon pod is running in the open-cluster-management namespace of your hub cluster. The following pods are running in the submariner-operator namespace of each managed cluster: submariner-addon submariner-gateway submariner-routeagent submariner-operator submariner-globalnet (only if Globalnet is enabled in the ClusterSet) submariner-lighthouse-agent submariner-lighthouse-coredns submariner-networkplugin-syncer (only if the specified CNI value is OVNKubernetes ) submariner-metrics-proxy Run the subctl diagnose all command to check the status of the required pods, with the exception of the submariner-addon pods. Make sure to run the must-gather command to collect logs that can help with debugging issues. 1.31. Troubleshooting Submariner end-to-end test failures After running Submariner end-to-end tests, you might get failures. Use the following sections to help you troubleshoot these end-to-end test failures. 1.31.1. Symptom: Submariner end-to-end data plane test fails When the end-to-end data plane test fails, the Submariner tests show that the connector pod can connect to the listener pod, but later the connector pod gets stuck in the listening phase. 1.31.2. Resolving the problem: Submariner end-to-end data plane test fails The maximum transmission unit (MTU) can cause the end-to-end data plane test failure. For example, the MTU might cause the inter-cluster traffic over the Internet Protocol Security (IPsec) to fail. Verify if the MTU causes the failure by running an end-to-end data plane test that uses a small packet size. To run this type of test, run the following command in your Submariner workspace: subctl verify --verbose --only connectivity --context <from_context> --tocontext <to_context> --image-override submariner-nettest=quay.io/submariner/nettest:devel --packet-size 200 If the test succeeds with this small packet size, you can resolve the connection issues by setting the transmission control protocol (TCP) maximum segment size (MSS). Set the TCP MSS by completing the following steps: Set the TCP MSS clamping value by annotating the gateway node. 
For example, run the following command with a value of 1200 : oc annotate node <node_name> submariner.io/tcp-clamp-mss=1200 Restart all the RouteAgent pods by running the following command: oc delete pod -n submariner-operator -l app=submariner-routeagent 1.31.3. Symptom: Submariner end-to-end test fails for bare-metal clusters The end-to-end data plane tests might fail for the bare-metal cluster if the container network interface (CNI) is OpenShiftSDN, or if the virtual extensible local-area network (VXLAN) is used for the inter-cluster tunnels. 1.31.4. Resolving the problem: Submariner end-to-end test fails for bare-metal clusters A bug in the User Datagram Protocol (UDP) checksum calculation by the hardware can be the root cause for the end-to-end data plane test failures for bare-metal clusters. To troubleshoot this bug, disable the hardware offloading by applying the following YAML file: apiVersion: apps/v1 kind: DaemonSet metadata: name: disable-offload namespace: submariner-operator spec: selector: matchLabels: app: disable-offload template: metadata: labels: app: disable-offload spec: tolerations: - operator: Exists containers: - name: disable-offload image: nicolaka/netshoot imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: true capabilities: add: - net_admin drop: - all privileged: true readOnlyRootFilesystem: false runAsNonRoot: false command: ["/bin/sh", "-c"] args: - ethtool --offload vxlan-tunnel rx off tx off; ethtool --offload vx-submariner rx off tx off; sleep infinity restartPolicy: Always securityContext: {} serviceAccount: submariner-routeagent serviceAccountName: submariner-routeagent hostNetwork: true 1.32. Troubleshooting restore status finishes with errors After you restore a backup, resources are restored correctly but the Red Hat Advanced Cluster Management restore resource shows a FinishedWithErrors status. 1.32.1. Symptom: Troubleshooting restore status finishes with errors Red Hat Advanced Cluster Management shows a FinishedWithErrors status and one or more of the Velero restore resources created by the Red Hat Advanced Cluster Management restore show a PartiallyFailed status. 1.32.2. Resolving the problem: Troubleshooting restore status finishes with errors If you restore from a backup that is empty, you can safely ignore the FinishedWithErrors status. Red Hat Advanced Cluster Management for Kubernetes restore shows a cumulative status for all Velero restore resources. If one status is PartiallyFailed and the others are Completed , the cumulative status you see is PartiallyFailed to notify you that there is at least one issue. To resolve the issue, check the status for all individual Velero restore resources with a PartiallyFailed status and view the logs for more details. You can get the log from the object storage directly, or download it from the OADP Operator by using the DownloadRequest custom resource. To create a DownloadRequest from the console, complete the following steps: Navigate to Operators > Installed Operators > Create DownloadRequest . Select BackupLog as your Kind and follow the console instructions to complete the DownloadRequest creation. 1.33. Troubleshooting multiline YAML parsing When you want to use the fromSecret function to add contents of a Secret resource into a Route resource, the contents are displayed incorrectly. 1.33.1.
Symptom: Troubleshooting multiline YAML parsing When the managed cluster and hub cluster are the same cluster, the certificate data is redacted, so the contents are not parsed as a template JSON string. You might receive the following error messages: message: >- [spec.tls.caCertificate: Invalid value: "redacted ca certificate data": failed to parse CA certificate: data does not contain any valid RSA or ECDSA certificates, spec.tls.certificate: Invalid value: "redacted certificate data": data does not contain any valid RSA or ECDSA certificates, spec.tls.key: Invalid value: "": no key specified] 1.33.2. Resolving the problem: Troubleshooting multiline YAML parsing Configure your certificate policy to retrieve the hub cluster and managed cluster fromSecret values. Use the autoindent function to update your certificate policy with the following content: tls: certificate: | {{ print "{{hub fromSecret "open-cluster-management" "minio-cert" "tls.crt" hub}}" | base64dec | autoindent }} 1.34. Troubleshooting ClusterCurator automatic template failure to deploy If you are using the ClusterCurator automatic template and it fails to deploy, follow the procedure to troubleshoot the problem. 1.34.1. Symptom: ClusterCurator automatic template failure to deploy You are unable to deploy managed clusters by using the ClusterCurator automatic template. The process might become stuck on the posthooks and might not create any logs. 1.34.2. Resolving the problem: ClusterCurator automatic template failure to deploy Complete the following steps to identify and resolve the problem: Check the ClusterCurator resource status in the cluster namespace for any messages or errors. In the Job resource named curator-job-* , which is in the same cluster namespace as in the previous step, check the pod log for any errors. Note: The job is removed after one hour due to a one hour time to live (TTL) setting. A combined sketch of these checks follows the command listing for this entry. | [
"adm must-gather --image=registry.redhat.io/rhacm2/acm-must-gather-rhel9:v2.12 --dest-dir=<directory>",
"<your-directory>/cluster-scoped-resources/gather-managed.log>",
"REGISTRY=<internal.repo.address:port> IMAGE1=USDREGISTRY/rhacm2/acm-must-gather-rhel9:v<2.x> adm must-gather --image=USDIMAGE1 --dest-dir=<directory>",
"adm must-gather --image=quay.io/stolostron/backplane-must-gather:SNAPSHOTNAME /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME",
"adm must-gather --image=quay.io/stolostron/backplane-must-gather:SNAPSHOTNAME /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=NAME ; tar -cvzf NAME.tgz NAME",
"REGISTRY=registry.example.com:5000 IMAGE=USDREGISTRY/multicluster-engine/must-gather-rhel8@sha256:ff9f37eb400dc1f7d07a9b6f2da9064992934b69847d17f59e385783c071b9d8 adm must-gather --image=USDIMAGE /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=./data",
"reason: Unschedulable message: '0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.'",
"Error from server: request to convert CR from an invalid group/version: cluster.open-cluster-management.io/v1beta1",
"annotate mce multiclusterengine pause=true",
"patch deployment cluster-manager -n multicluster-engine -p \\ '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"registration-operator\",\"image\":\"registry.redhat.io/multicluster-engine/registration-operator-rhel8@sha256:35999c3a1022d908b6fe30aa9b85878e666392dbbd685e9f3edcb83e3336d19f\"}]}}}}' export ORIGIN_REGISTRATION_IMAGE=USD(oc get clustermanager cluster-manager -o jsonpath='{.spec.registrationImagePullSpec}')",
"patch clustermanager cluster-manager --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/registrationImagePullSpec\", \"value\": \"registry.redhat.io/multicluster-engine/registration-rhel8@sha256:a3c22aa4326859d75986bf24322068f0aff2103cccc06e1001faaf79b9390515\"}]'",
"annotate crds managedclustersets.cluster.open-cluster-management.io operator.open-cluster-management.io/version- annotate crds managedclustersetbindings.cluster.open-cluster-management.io operator.open-cluster-management.io/version-",
"-n multicluster-engine delete pods -l app=cluster-manager wait crds managedclustersets.cluster.open-cluster-management.io --for=jsonpath=\"{.metadata.annotations['operator\\.open-cluster-management\\.io/version']}\"=\"2.3.3\" --timeout=120s wait crds managedclustersetbindings.cluster.open-cluster-management.io --for=jsonpath=\"{.metadata.annotations['operator\\.open-cluster-management\\.io/version']}\"=\"2.3.3\" --timeout=120s",
"patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' -p='[{\"op\":\"replace\", \"path\":\"/spec/resource/version\", \"value\":\"v1beta1\"}]' patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' --subresource status -p='[{\"op\":\"remove\", \"path\":\"/status/conditions\"}]' patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' -p='[{\"op\":\"replace\", \"path\":\"/spec/resource/version\", \"value\":\"v1beta1\"}]' patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' --subresource status -p='[{\"op\":\"remove\", \"path\":\"/status/conditions\"}]'",
"wait storageversionmigration managedclustersets.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s wait storageversionmigration managedclustersetbindings.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s",
"annotate mce multiclusterengine pause- patch clustermanager cluster-manager --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/registrationImagePullSpec\", \"value\": \"'USDORIGIN_REGISTRATION_IMAGE'\"}]'",
"get managedclusterset get managedclustersetbinding -A",
"-n multicluster-engine get pods -l app=managedcluster-import-controller-v2",
"-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1",
"-n <managed_cluster_name> get secrets <managed_cluster_name>-import",
"-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1 | grep importconfig-controller",
"get managedcluster <managed_cluster_name> -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}' | grep ManagedClusterImportSucceeded",
"get pod -n open-cluster-management-agent | grep klusterlet-registration-agent",
"logs <registration_agent_pod> -n open-cluster-management-agent",
"get infrastructure cluster -o yaml | grep apiServerURL",
"error log: Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition Error from server (AlreadyExists): error when creating \"STDIN\": customresourcedefinitions.apiextensions.k8s.io \"klusterlets.operator.open-cluster-management.io\" already exists The cluster cannot be imported because its Klusterlet CRD already exists. Either the cluster was already imported, or it was not detached completely during a previous detach process. Detach the existing cluster before trying the import again.\"",
"get all -n open-cluster-management-agent get all -n open-cluster-management-agent-addon",
"get klusterlet | grep klusterlet | awk '{print USD1}' | xargs oc patch klusterlet --type=merge -p '{\"metadata\":{\"finalizers\": []}}'",
"delete namespaces open-cluster-management-agent open-cluster-management-agent-addon --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc delete crds --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc patch crds --type=merge -p '{\"metadata\":{\"finalizers\": []}}'",
"time=\"2020-08-07T15:27:55Z\" level=error msg=\"Error: error setting up new vSphere SOAP client: Post https://147.1.1.1/sdk: x509: cannot validate certificate for xx.xx.xx.xx because it doesn't contain any IP SANs\" time=\"2020-08-07T15:27:55Z\" level=error",
"Error: error setting up new vSphere SOAP client: Post https://vspherehost.com/sdk: x509: certificate signed by unknown authority\"",
"x509: certificate has expired or is not yet valid",
"time=\"2020-08-07T19:41:58Z\" level=debug msg=\"vsphere_tag_category.category: Creating...\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\"Error: could not create category: POST https://vspherehost.com/rest/com/vmware/cis/tagging/category: 403 Forbidden\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\" on ../tmp/openshift-install-436877649/main.tf line 54, in resource \\\"vsphere_tag_category\\\" \\\"category\\\":\" time=\"2020-08-07T19:41:58Z\" level=error msg=\" 54: resource \\\"vsphere_tag_category\\\" \\\"category\\\" {\"",
"failed to fetch Master Machines: failed to load asset \\\\\\\"Install Config\\\\\\\": invalid \\\\\\\"install-config.yaml\\\\\\\" file: platform.vsphere.dnsVIP: Invalid value: \\\\\\\"\\\\\\\": \\\\\\\"\\\\\\\" is not a valid IP",
"time=\"2020-08-11T14:31:38-04:00\" level=debug msg=\"vsphereprivate_import_ova.import: Creating...\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error msg=\"Error: rpc error: code = Unavailable desc = transport is closing\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=fatal msg=\"failed to fetch Cluster: failed to generate asset \\\"Cluster\\\": failed to create cluster: failed to apply Terraform: failed to complete the change\"",
"ERROR ERROR Error: error reconfiguring virtual machine: error processing disk changes post-clone: disk.0: ServerFaultCode: NoPermission: RESOURCE (vm-71:2000), ACTION (queryAssociatedProfile): RESOURCE (vm-71), ACTION (PolicyIDByVirtualDisk)",
"clouds: openstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt\"",
"spec: baseDomain: dev09.red-chesterfield.com clusterName: txue-osspoke platform: openstack: cloud: openstack credentialsSecretRef: name: txue-osspoke-openstack-creds certificatesSecretRef: name: txue-osspoke-openstack-certificatebundle",
"create secret generic txue-osspoke-openstack-certificatebundle --from-file=ca.crt=ca.crt.pem -n USDCLUSTERNAME",
"E0917 03:04:05.874759 1 manifestwork_controller.go:179] Reconcile work test-1-klusterlet-addon-workmgr fails with err: Failed to update work status with err Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:05.874887 1 base_controller.go:231] \"ManifestWorkAgent\" controller failed to sync \"test-1-klusterlet-addon-workmgr\", err: Failed to update work status with err Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:37.245859 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManifestWork: failed to list *v1.ManifestWork: Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks?resourceVersion=607424\": x509: certificate signed by unknown authority",
"I0917 02:27:41.525026 1 event.go:282] Event(v1.ObjectReference{Kind:\"Namespace\", Namespace:\"open-cluster-management-agent\", Name:\"open-cluster-management-agent\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'ManagedClusterAvailableConditionUpdated' update managed cluster \"test-1\" available condition to \"True\", due to \"Managed cluster is available\" E0917 02:58:26.315984 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1beta1.CertificateSigningRequest: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority E0917 02:58:26.598343 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\": x509: certificate signed by unknown authority E0917 02:58:27.613963 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: failed to list *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority",
"delete secret -n <cluster_name> <cluster_name>-import",
"delete secret -n <cluster_name> <cluster_name>-import",
"get secret -n <cluster_name> <cluster_name>-import -ojsonpath='{.data.import\\.yaml}' | base64 --decode > import.yaml",
"apply -f import.yaml",
"api-resources --verbs=list --namespaced -o name | grep -E '^secrets|^serviceaccounts|^managedclusteraddons|^roles|^rolebindings|^manifestworks|^leases|^managedclusterinfo|^appliedmanifestworks'|^clusteroauths' | xargs -n 1 oc get --show-kind --ignore-not-found -n <cluster_name>",
"edit <resource_kind> <resource_name> -n <namespace>",
"delete ns <cluster-name>",
"delete secret auto-import-secret -n <cluster-namespace>",
"apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Delete driver: cinder.csi.openstack.org kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: 'true' name: standard-csi parameters: force-create: 'true'",
"adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME",
"adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME --dest-dir=<SOMENAME> ; tar -cvzf <SOMENAME>.tgz <SOMENAME>",
"There are two ways to access the provisioned PostgreSQL database.",
"exec -it multicluster-global-hub-postgres-0 -c multicluster-global-hub-postgres -n multicluster-global-hub -- psql -U postgres -d hoh Or access the database installed by crunchy operator exec -it USD(kubectl get pods -n multicluster-global-hub -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -c database -n multicluster-global-hub -- psql -U postgres -d hoh -c \"SELECT 1\"",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: multicluster-global-hub-postgres-lb namespace: multicluster-global-hub spec: ports: - name: postgres port: 5432 protocol: TCP targetPort: 5432 selector: name: multicluster-global-hub-postgres type: LoadBalancer EOF",
"Host get svc postgres-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}' Password get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"password\" | base64decode}}'",
"patch postgrescluster postgres -n multicluster-global-hub -p '{\"spec\":{\"service\":{\"type\":\"LoadBalancer\"}}}' --type merge",
"Host get svc -n multicluster-global-hub postgres-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}' Username get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"user\" | base64decode}}' Password get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"password\" | base64decode}}' Database get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"dbname\" | base64decode}}'",
"pg_dump hoh > hoh.sql",
"pg_dump -h my.host.com -p 5432 -U postgres -F t hoh -f hoh-USD(date +%d-%m-%y_%H-%M).tar",
"psql -h another.host.com -p 5432 -U postgres -d hoh < hoh.sql",
"pg_restore -h another.host.com -p 5432 -U postgres -d hoh hoh-USD(date +%d-%m-%y_%H-%M).tar",
"edit managedcluster <cluster-name>",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60",
"get pod -n <new_cluster_name>",
"logs <new_cluster_name_provision_pod_name> -n <new_cluster_name> -c hive",
"describe clusterdeployments -n <new_cluster_name>",
"No subnets provided for zones",
"get secret grafana-config -n open-cluster-management-observability -o jsonpath=\"{.data.grafana\\.ini}\" | base64 -d | grep dataproxy -A 4",
"[dataproxy] timeout = 300 dial_timeout = 30 keep_alive_seconds = 300",
"get secret/grafana-datasources -n open-cluster-management-observability -o jsonpath=\"{.data.datasources\\.yaml}\" | base64 -d | grep queryTimeout",
"queryTimeout: 300s",
"annotate route grafana -n open-cluster-management-observability --overwrite haproxy.router.openshift.io/timeout=300s",
"% oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true True True 56d cluster1 true True True 16h",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-ready-clusters namespace: default spec: clusterSelector: {} status: decisions: - clusterName: cluster1 clusterNamespace: cluster1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: managedcluster-admin-user-zisis namespace: local-cluster rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters verbs: - get",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: managedcluster-admin-user-zisis namespace: local-cluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: managedcluster-admin-user-zisis namespace: local-cluster subjects: - kind: User name: zisis apiGroup: rbac.authorization.k8s.io",
"failed to install release: unable to build kubernetes objects from release manifest: unable to recognize \"\": no matches for kind \"Deployment\" in version \"extensions/v1beta1\"",
"error: unable to recognize \"old.yaml\": no matches for kind \"Deployment\" in version \"deployment/v1beta1\"",
"apiVersion: apps/v1 kind: Deployment",
"explain <resource>",
"get klusterlets klusterlet -oyaml",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: deva namespace: ch-obj labels: name: obj-sub spec: type: ObjectBucket pathname: http://ec2-100-26-232-156.compute-1.amazonaws.com:9000/deva sourceNamespaces: - default secretRef: name: dev --- apiVersion: v1 kind: Secret metadata: name: dev namespace: ch-obj labels: name: obj-sub data: AccessKeyID: YWRtaW4= SecretAccessKey: cGFzc3dvcmRhZG1pbg==",
"annotate appsub -n <subscription-namespace> <subscription-name> test=true",
"get pods -n open-cluster-management|grep observability",
"get crd|grep observ",
"multiclusterobservabilities.observability.open-cluster-management.io observabilityaddons.observability.open-cluster-management.io observatoria.core.observatorium.io",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"error: response status code is 500 Internal Server Error, response body is x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"observability-client-ca-certificate\")",
"delete secret observability-controller-open-cluster-management.io-observability-signer-client-cert -n open-cluster-management-addon-observability",
"project open-cluster-management",
"patch search -n open-cluster-management search-v2-operator --type json -p '[{\"op\": \"add\", \"path\": \"/spec/deployments/database/resources\", \"value\": {\"limits\": {\"memory\": \"16Gi\"}, \"requests\": {\"memory\": \"32Mi\", \"cpu\": \"25m\"}}}]'",
"annotate search search-v2-operator search-pause=true",
"edit cm search-postgres -n open-cluster-management",
"postgresql.conf: |- work_mem = '128MB' # Higher values allocate more memory max_parallel_workers_per_gather = '0' # Disables parallel queries shared_buffers = '1GB' # Higher values allocate more memory",
"delete pod search-postgres-xyz search-api-xzy",
"get cm search-postgres -n open-cluster-management -o yaml",
"ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg=\"critical error detected; halting\" err=\"compaction: group 0@5827190780573537664: compact blocks [ /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE]: 2 errors: populate block: add series: write series data: write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device; write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device\"",
"delete pod observability-thanos-compact-0 -n open-cluster-management-observability",
"ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg=\"critical error detected; halting\" err=\"compaction: group 0@15699422364132557315: compact blocks [/var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZQK7TD06J2XWGR5EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZYEZ2DVDQXF1STVEXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HM05APAHXBQSNC0N5EXAMPLE]: populate block: chunk iter: cannot populate chunk 8 from block 01HKZYEZ2DVDQXF1STVEXAMPLE: segment index 0 out of range\"",
"rsh observability-thanos-compact-0 [..] thanos tools bucket verify -r --objstore.config=\"USDOBJSTORE_CONFIG\" --objstore-backup.config=\"USDOBJSTORE_CONFIG\" --id=01HKZYEZ2DVDQXF1STVEXAMPLE",
"thanos tools bucket mark --id \"01HKZYEZ2DVDQXF1STVEXAMPLE\" --objstore.config=\"USDOBJSTORE_CONFIG\" --marker=deletion-mark.json --details=DELETE",
"thanos tools bucket cleanup --objstore.config=\"USDOBJSTORE_CONFIG\"",
"subctl verify --verbose --only connectivity --context <from_context> --tocontext <to_context> --image-override submariner-nettest=quay.io/submariner/nettest:devel --packet-size 200",
"annotate node <node_name> submariner.io/tcp-clamp-mss=1200",
"delete pod -n submariner-operator -l app=submariner-routeagent",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: disable-offload namespace: submariner-operator spec: selector: matchLabels: app: disable-offload template: metadata: labels: app: disable-offload spec: tolerations: - operator: Exists containers: - name: disable-offload image: nicolaka/netshoot imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: true capabilities: add: - net_admin drop: - all privileged: true readOnlyRootFilesystem: false runAsNonRoot: false command: [\"/bin/sh\", \"-c\"] args: - ethtool --offload vxlan-tunnel rx off tx off; ethtool --offload vx-submariner rx off tx off; sleep infinity restartPolicy: Always securityContext: {} serviceAccount: submariner-routeagent serviceAccountName: submariner-routeagent hostNetwork: true",
"message: >- [spec.tls.caCertificate: Invalid value: \"redacted ca certificate data\": failed to parse CA certificate: data does not contain any valid RSA or ECDSA certificates, spec.tls.certificate: Invalid value: \"redacted certificate data\": data does not contain any valid RSA or ECDSA certificates, spec.tls.key: Invalid value: \"\": no key specified]",
"tls: certificate: | {{ print \"{{hub fromSecret \"open-cluster-management\" \"minio-cert\" \"tls.crt\" hub}}\" | base64dec | autoindent }}"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/troubleshooting/troubleshooting |
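The ClusterCurator and provisioning checks described in section 1.34.2 of the troubleshooting entry above can be combined into one short script. The following is a minimal bash sketch and is not part of the product documentation: the cluster namespace argument, the grep patterns, and the log line counts are assumptions, and the script only reads status and logs, so it can be adapted freely.

#!/usr/bin/env bash
# Minimal sketch: inspect curation and provisioning state for one cluster namespace on the hub.
# Assumes oc is already logged in to the hub cluster; the namespace is passed as the first argument.
CLUSTER_NS="${1:?usage: curator-check.sh <cluster-namespace>}"

# 1. ClusterCurator conditions often explain why posthooks are stuck.
oc get clustercurator -n "$CLUSTER_NS" -o yaml | grep -A 5 "conditions:" || true

# 2. Curator jobs are named curator-job-*; print their logs before the one-hour TTL removes them.
for job in $(oc get jobs -n "$CLUSTER_NS" -o name | grep curator-job); do
  echo "== logs for $job =="
  oc logs "$job" -n "$CLUSTER_NS" --all-containers=true || true
done

# 3. Fall back to the Hive provisioning pod, or to the ClusterDeployment status if no such pod exists.
PROV_POD=$(oc get pods -n "$CLUSTER_NS" -o name | grep provision | head -n 1)
if [ -n "$PROV_POD" ]; then
  oc logs "$PROV_POD" -n "$CLUSTER_NS" -c hive | tail -n 50
else
  oc describe clusterdeployment -n "$CLUSTER_NS"
fi

Run it on the hub cluster as ./curator-check.sh <cluster-name>; if the curator job has already been pruned, only the provisioning pod or ClusterDeployment output is shown.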
Chapter 3. Enhancements | Chapter 3. Enhancements This section describes the major enhancements introduced in Red Hat OpenShift Data Foundation 4.16. 3.1. New elements for bucket policies OpenShift Data Foundation now has the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . An example policy that uses these elements follows the command listing for this entry. 3.2. Addition of a new AWS region to the Multicloud Object Gateway Operator A new AWS region, ca-west-1 , is added to the supported regions of the Multicloud Object Gateway (MCG) operator for the creation of default backingstore. 3.3. Increase in resource allocation for OpenShift Data Foundation Multicloud Object Gateway BackingStore The default resources for PV pool CPU and memory are increased to 999m and 1Gi respectively to enable more resource allocation for OpenShift Data Foundation MCG BackingStore. 3.4. Multicloud Object Gateway created routes to work with HTTPS only For deployments that need to disable HTTP and use only HTTPS, an option is added to set DenyHTTP to the storage cluster CR spec.multiCloudGateway.denyHTTP . This causes the routes created by the Multicloud Object Gateway to use HTTPS only. 3.5. Addition of protected condition to DR protected workloads with metrics and alerts for monitoring A Protected condition is added to DR protected workloads by summarizing various conditions regarding the DR protected workload from the ManagedCluster , and metrics and alerts are generated based on it. DR protected workload health at the hub is reflected based only on the time when contents of the respective PVCs are synced. This applies only to RegionalDR use cases and not to MetroDR use cases. On the ManagedCluster , the workload DR protection health is expanded into several conditions. This makes it non-trivial for a user to monitor the workload DR health across these conditions. The added Protected condition and alerts help you monitor workload DR protection more effectively. 3.6. Support for listing multiple uploads in NamespaceStore Filesystem You can now list the files that are still uploading or list incomplete multipart uploads in NamespaceStore Filesystem by using the following command: 3.7. Option to modify thresholds for Ceph full , nearfull , and backfillfull attributes Depending on the cluster requirements, the full , nearfull , and backfillfull threshold values can be updated by using the odf-cli CLI command. For example: odf set full <val> odf set nearfull <val> odf set backfillfull <val> Note The value must be in the range of 0.0 to 1.0 and you need to ensure that the value is not very close to 1.0. | [
"s3api list--multipart-uploads --bucket <bucket_name>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/4.16_release_notes/enhancements |
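The NotPrincipal , NotAction , and NotResource elements called out in section 3.1 of the enhancements entry above can be exercised with any S3-compatible client. The following is a minimal sketch and is not taken from the release notes: the bucket name, account name, object prefix, and endpoint URL are placeholders, and the policy is only one possible combination of the new elements.

# Sketch: every principal except "app-account" is limited to read-only access,
# and writes are denied anywhere outside the public/ prefix.
cat > not-elements-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyForOtherPrincipals",
      "Effect": "Deny",
      "NotPrincipal": { "AWS": ["app-account"] },
      "NotAction": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
    },
    {
      "Sid": "WritesOnlyUnderPublicPrefix",
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "NotResource": ["arn:aws:s3:::my-bucket/public/*"]
    }
  ]
}
EOF
# Apply it with the AWS CLI pointed at the Multicloud Object Gateway S3 endpoint.
aws s3api put-bucket-policy --bucket my-bucket \
    --policy file://not-elements-policy.json \
    --endpoint-url https://s3-openshift-storage.apps.example.com

Because Deny statements take precedence over any Allow, keep such policies narrow; removing the policy again with aws s3api delete-bucket-policy restores the default account permissions.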
Chapter 8. Documentation | Chapter 8. Documentation Supported configurations Check the latest information about 3scale 2.15 supported configurations at the Red Hat 3scale API Management Supported Configurations website. Security updates Check the latest information about 3scale 2.15 security updates in the Red Hat Product Advisories portal. Erratas Advisory for the Container Images: RHEA-2023:112722 Upgrade guides Check the procedures to upgrade your 3scale installation from 2.13 to 2.14, for the following deployments: Based on operators APIcast in operator-based deployments | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/documentation |
Chapter 1. LLVM Toolset | Chapter 1. LLVM Toolset LLVM Toolset is a Red Hat offering for developers on Red Hat Enterprise Linux (RHEL). It provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis. For Red Hat Enterprise Linux 8, LLVM Toolset is available as a module. LLVM Toolset is available as packages for Red Hat Enterprise Linux 9. 1.1. LLVM Toolset components The following components are available as a part of LLVM Toolset: Name Version Description clang 16.0.6 An LLVM compiler front end for C and C++. lldb 16.0.6 A C and C++ debugger using portions of LLVM. compiler-rt 16.0.6 Runtime libraries for LLVM and Clang. llvm 16.0.6 A collection of modular and reusable compiler and toolchain technologies. libomp 16.0.6 A library for using Open MP API specification for parallel programming. lld 16.0.6 An LLVM linker. python-lit 16.0.6 A software testing tool for LLVM- and Clang-based test suites. Note The CMake build manager is not part of LLVM Toolset. On Red Hat Enterprise Linux 8, CMake is available in the system repository. On Red Hat Enterprise Linux 9, CMake is available in the system repository. For more information on how to install CMake, see Installing CMake on Red Hat Enterprise Linux . 1.2. LLVM Toolset compatibility LLVM Toolset is available for Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 on the following architectures: AMD and Intel 64-bit 64-bit ARM IBM Power Systems, Little Endian 64-bit IBM Z 1.3. Installing LLVM Toolset Complete the following steps to install LLVM Toolset including all development and debugging tools as well as dependent packages. Prerequisites All available Red Hat Enterprise Linux updates are installed. Procedure On Red Hat Enterprise Linux 8, install the llvm-toolset module by running: Important This does not install the LLDB debugger or the python3-lit package on Red Hat Enterprise Linux 8. To install the LLDB debugger and the python3-lit package, run: On Red Hat Enterprise Linux 9, install the llvm-toolset package by running: Important This does not install the LLDB debugger or the python3-lit package on Red Hat Enterprise Linux 9. To install the LLDB debugger and the python3-lit package, run: 1.4. Installing the CMake build manager The CMake build manager is a tool that manages the build process of your source code independently from your compiler. CMake can generate a native build environment to compile source code, create libraries, generate wrappers, and build executable files. Complete the following steps to install the CMake build manager. Prerequisites LLVM Toolset is installed. For more information, see Installing LLVM Toolset . Procedure To install CMake, run the following command: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on the CMake build manager, see the official CMake documentation overview About CMake . For an introduction to using the CMake build manager, see: The CMake Reference Documentation Introduction . The official CMake documentation CMake Tutorial . 1.5. Installing LLVM Toolset documentation You can install documentation for LLVM Toolset on your local system. Prerequisites LLVM Toolset is installed. For more information, see Installing LLVM Toolset . Procedure To install the llvm-doc package, run the following command: On Red Hat Enterprise Linux 8: You can find the documentation under the following path: /usr/share/doc/llvm/html/index.html . 
On Red Hat Enterprise Linux 9: You can find the documentation under the following path: /usr/share/doc/llvm/html/index.html . 1.6. Installing CMake documentation You can install documentation for the CMake build manager on your local system. Prerequisites CMake is installed. For more information, see Installing the CMake build manager . Procedure To install the cmake-doc package, run the following command: On Red Hat Enterprise Linux 8: You can find the documentation under the following path: /usr/share/doc/cmake/html/index.html . On Red Hat Enterprise Linux 9: You can find the documentation under the following path: /usr/share/doc/cmake/html/index.html . 1.7. Additional resources For more information on LLVM Toolset, see the official LLVM documentation . | [
"yum module install llvm-toolset",
"yum install lldb python3-lit",
"dnf install llvm-toolset",
"dnf install lldb python3-lit",
"yum install cmake",
"dnf install cmake",
"yum install llvm-doc",
"dnf install llvm-doc",
"yum install cmake-doc",
"dnf install cmake-doc"
] | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_16.0.6_toolset/assembly_llvm_using-llvm-toolset |
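To see the Clang compiler and the CMake build manager from the LLVM Toolset entry above working together, the following is a minimal sketch; it is not taken from the toolset documentation, and the project name and file contents are arbitrary.

# Compile one C file directly with clang, then drive the same file through CMake.
mkdir -p hello && cd hello
cat > hello.c <<'EOF'
#include <stdio.h>

int main(void) {
    /* __clang_version__ is a predefined string macro, so this prints the compiler version. */
    printf("built with clang %s\n", __clang_version__);
    return 0;
}
EOF

# Direct compilation with the Clang front end.
clang -O2 -o hello hello.c && ./hello

# The same build described to CMake; setting CC=clang makes CMake pick the toolset compiler.
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.11)
project(hello C)
add_executable(hello hello.c)
EOF
mkdir -p build && cd build
CC=clang cmake .. && cmake --build .
./hello

Both invocations produce the same binary; the CMake form is the one that scales to multi-file projects.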
Chapter 14. Pod [v1] | Chapter 14. Pod [v1] Description Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. Type object 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodSpec is a description of a pod. status object PodStatus represents information about the status of a pod. Status may trail the actual state of a system, especially if the node that hosts the pod cannot contact the control plane. 14.1.1. .spec Description PodSpec is a description of a pod. Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object Affinity is a group of affinity scheduling rules. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. 
This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object PodOS defines the OS parameters of a pod. overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. 
In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. serviceAccount string DeprecatedServiceAccount is a depreciated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 14.1.2. .spec.affinity Description Affinity is a group of affinity scheduling rules. Type object Property Type Description nodeAffinity object Node affinity is a group of node affinity scheduling rules. podAffinity object Pod affinity is a group of inter pod affinity scheduling rules. podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules. 14.1.3. .spec.affinity.nodeAffinity Description Node affinity is a group of node affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 14.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 14.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). 
A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Property Type Description preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 14.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 14.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 14.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 14.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.11. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 14.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 14.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 14.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 14.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 14.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.18. .spec.affinity.podAffinity Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 14.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 14.1.20. 
.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 14.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.22. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 14.1.23. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.24. .spec.affinity.podAntiAffinity Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
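As an illustrative sketch only (the label value is hypothetical), a required pod-affinity term that co-locates this pod on the same node as pods labelled app=cache:

spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache                   # hypothetical label
        topologyKey: kubernetes.io/hostname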
requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 14.1.25. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 14.1.26. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 14.1.27. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
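A hedged sketch of the weighted (preferred) anti-affinity form described above, asking the scheduler to spread pods of a hypothetical app=web workload across nodes where possible:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                      # must be in the range 1-100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web                   # hypothetical label
          topologyKey: kubernetes.io/hostname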
14.1.28. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 14.1.29. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.30. .spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 14.1.31. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. 
If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. 
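The following sketch shows a single container entry using several of the fields above (command, args, imagePullPolicy, ports); the container name, image, and port values are hypothetical:

spec:
  containers:
  - name: web                            # hypothetical container name
    image: registry.example.com/web:1.2  # hypothetical image
    imagePullPolicy: IfNotPresent        # documented enum value
    command: ["/usr/local/bin/web"]      # replaces the image ENTRYPOINT
    args: ["--listen", ":8080"]          # replaces the image CMD
    ports:
    - name: http                         # IANA_SVC_NAME, unique within the pod
      containerPort: 8080
      protocol: TCP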
securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 14.1.32. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 14.1.33. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
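A minimal sketch of a literal environment variable on a container (names and values are hypothetical):

spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0  # hypothetical image
    env:
    - name: LOG_LEVEL                    # must be a C_IDENTIFIER
      value: "debug"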
value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 14.1.34. .spec.containers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 14.1.35. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 14.1.36. .spec.containers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.37. .spec.containers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.38. .spec.containers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 14.1.39. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 14.1.40. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. 
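An illustrative sketch of the four valueFrom sources within a container's env list; the ConfigMap, Secret, and container names are hypothetical, and the downward API field path is one commonly used value:

env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace      # downward API field
- name: CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app                 # hypothetical container name
      resource: limits.cpu
      divisor: "1"
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-credentials               # hypothetical Secret
      key: password
      optional: false
- name: FEATURE_FLAGS
  valueFrom:
    configMapKeyRef:
      name: app-config                   # hypothetical ConfigMap
      key: flags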
prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 14.1.41. .spec.containers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 14.1.42. .spec.containers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 14.1.43. .spec.containers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 14.1.44. .spec.containers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.45. .spec.containers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.46. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. 
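A hedged sketch combining envFrom sources with postStart and preStop handlers on one container; the referenced ConfigMap, Secret, path, and port are hypothetical:

envFrom:
- prefix: CFG_                           # optional C_IDENTIFIER prefix
  configMapRef:
    name: app-config                     # hypothetical ConfigMap
- secretRef:
    name: app-secrets                    # hypothetical Secret
    optional: true
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo started > /tmp/started"]
  preStop:
    httpGet:
      path: /shutdown                    # hypothetical endpoint
      port: 8080
      scheme: HTTP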
port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.47. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.48. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.49. .spec.containers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.50. .spec.containers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.51. .spec.containers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.52. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.53. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.54. 
.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.55. .spec.containers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.56. .spec.containers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.57. .spec.containers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.58. 
.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.59. .spec.containers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.60. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.61. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.62. .spec.containers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.63. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 14.1.64. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 
Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 14.1.65. .spec.containers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.66. .spec.containers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.67. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.68. .spec.containers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. 
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.69. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.70. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.71. .spec.containers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.72. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 14.1.73. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 14.1.74. .spec.containers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.75. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. 
Type array 14.1.76. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.77. .spec.containers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. 
Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.78. .spec.containers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 14.1.79. .spec.containers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.80. .spec.containers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.81. .spec.containers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.82. .spec.containers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. 
Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.83. .spec.containers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.84. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.85. .spec.containers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. 
port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.86. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.87. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.88. .spec.containers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.89. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 14.1.90. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 14.1.91. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 14.1.92. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. 
subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 14.1.93. .spec.dnsConfig Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 14.1.94. .spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 14.1.95. .spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 14.1.96. .spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 14.1.97. .spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell.
The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence.
startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 14.1.98. .spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. 
Type array 14.1.99. .spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 14.1.100. .spec.ephemeralContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 14.1.101. .spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 14.1.102. .spec.ephemeralContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.103. .spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.104. .spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 14.1.105. .spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array
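As a hedged sketch of the env and envFrom shapes documented above (the EnvVar and EnvVarSource structures are the same for regular, init, and ephemeral containers), the following shows a regular container whose environment is populated from a literal value, a Secret key, the Pod's own metadata, a resource limit, and a ConfigMap; the pod, container, image, Secret, and ConfigMap names are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: env-demo                            # hypothetical name
spec:
  containers:
  - name: app                               # hypothetical container
    image: registry.example.com/app:1.0     # placeholder image
    env:
    - name: LOG_LEVEL                       # plain literal value
      value: debug
    - name: DB_PASSWORD                     # secretKeyRef, as in the secretKeyRef section above
      valueFrom:
        secretKeyRef:
          name: db-credentials              # hypothetical Secret
          key: password
          optional: false
    - name: POD_NAMESPACE                   # fieldRef selects a field of the Pod object
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: CPU_LIMIT                       # resourceFieldRef exposes a container resource
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: limits.cpu
          divisor: 1m
    envFrom:
    - prefix: CFG_                          # optional prefix prepended to each key
      configMapRef:
        name: app-config                    # hypothetical ConfigMap

14.1.106.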
.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 14.1.107. .spec.ephemeralContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 14.1.108. .spec.ephemeralContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 14.1.109. .spec.ephemeralContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 14.1.110. .spec.ephemeralContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.111. .spec.ephemeralContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
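The lifecycle handler shape described in the preceding sections is shared with regular containers (the Kubernetes API generally rejects lifecycle hooks on ephemeral containers themselves), so the following hedged sketch attaches a postStart exec hook and a preStop httpGet hook to a regular container; the names, image, command, path, and port are placeholder assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                      # hypothetical name
spec:
  containers:
  - name: web                               # hypothetical container
    image: registry.example.com/web:1.0     # placeholder image
    lifecycle:
      postStart:
        exec:                               # one handler per hook: exec, httpGet, or tcpSocket
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:
          path: /shutdown
          port: 8080
          scheme: HTTP

14.1.112.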
.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.113. .spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.114. .spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.115. .spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.116. .spec.ephemeralContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.117. .spec.ephemeralContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.118. .spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.119. .spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.120. .spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.121. .spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.122. .spec.ephemeralContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.123. .spec.ephemeralContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.124. .spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.125. .spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.126. .spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.127. .spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.128. .spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.129. .spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 14.1.130. .spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. 
Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 14.1.131. .spec.ephemeralContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.132. .spec.ephemeralContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.133. .spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 
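To make the Probe and GRPCAction fields above concrete, here is a hedged sketch of a gRPC-based readiness probe; because the Kubernetes API does not generally allow probes on ephemeral containers, the sketch uses a regular container, and the pod name, container name, image, port, and health-check service name are placeholder assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo                          # hypothetical name
spec:
  containers:
  - name: grpc-app                              # hypothetical container
    image: registry.example.com/grpc-app:1.0    # placeholder image
    ports:
    - containerPort: 9090
    readinessProbe:
      grpc:
        port: 9090                              # gRPC port, 1-65535
        service: readiness                      # optional service name placed in the HealthCheckRequest
      periodSeconds: 10
      failureThreshold: 3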
14.1.134. .spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.135. .spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.136. .spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.137. .spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.138. .spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 14.1.139. .spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 14.1.140. .spec.ephemeralContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.141. 
.spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.142. .spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.143. .spec.ephemeralContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.144. .spec.ephemeralContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 14.1.145. .spec.ephemeralContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.146. .spec.ephemeralContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.147. .spec.ephemeralContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. 
Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.148. .spec.ephemeralContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.149. .spec.ephemeralContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.150. .spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.151. 
.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.152. .spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.153. .spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.154. .spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.155. .spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 14.1.156. .spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 14.1.157. .spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 14.1.158. .spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). 
- "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 14.1.159. .spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 14.1.160. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 14.1.161. .spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 14.1.162. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.163. .spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 14.1.164. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. 
The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. 
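A minimal sketch of how the args, command, env, and image fields fit together on an init container; the image reference, names, and values are hypothetical. command replaces the image ENTRYPOINT, args replaces its CMD, and the $(DB_HOST) reference is expanded from the container's env before the process starts.

apiVersion: v1
kind: Pod
metadata:
  name: init-env-demo                          # hypothetical name
spec:
  initContainers:
  - name: wait-for-db                          # must be a unique DNS_LABEL within the pod
    image: registry.example.com/busybox:1.36   # hypothetical image
    command: ["sh", "-c"]                      # overrides the image ENTRYPOINT
    args: ["echo connecting to $(DB_HOST); sleep 5"]   # $(DB_HOST) is resolved from env below
    env:
    - name: DB_HOST
      value: "db.example.internal"             # hypothetical value
  containers:
  - name: app
    image: registry.example.com/app:1.0        # hypothetical image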
ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. 
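The stdin, stdinOnce, tty, and termination-message fields rarely appear together outside of interactive or debugging workloads, but the following fragment of a Pod spec (hypothetical names and image) shows their shape:

initContainers:
- name: interactive-setup                          # hypothetical
  image: registry.example.com/tools:1.0            # hypothetical
  stdin: true                                      # allocate a stdin buffer in the runtime
  stdinOnce: true                                  # close stdin after the first attach session ends
  tty: true                                        # allocate a TTY; requires stdin to be true
  terminationMessagePath: /var/run/termination-log # overrides the /dev/termination-log default
  terminationMessagePolicy: FallbackToLogsOnError  # use the last log output if the file is empty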
volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 14.1.165. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 14.1.166. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 14.1.167. .spec.initContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 14.1.168. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 14.1.169. .spec.initContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.170. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.171. .spec.initContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 14.1.172. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 14.1.173. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 14.1.174. .spec.initContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 14.1.175. .spec.initContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 14.1.176. .spec.initContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 14.1.177. .spec.initContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.178. 
.spec.initContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.179. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.180. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.181. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.182. .spec.initContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.183. .spec.initContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.184. .spec.initContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.185. 
.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.186. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.187. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.188. .spec.initContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.189. .spec.initContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. 
The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.190. .spec.initContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.191. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.192. .spec.initContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.193. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.194. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.195. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.196. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 14.1.197. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 14.1.198. .spec.initContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.199. .spec.initContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. 
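The Probe and ExecAction shapes shown here are shared with regular containers; because init containers may not have readiness probes (see the initContainers description above), this hypothetical sketch attaches the exec action to an ordinary app container instead:

containers:
- name: app
  image: registry.example.com/app:1.0   # hypothetical
  readinessProbe:
    exec:
      command: ["cat", "/tmp/ready"]    # exit status 0 means ready; non-zero means not ready
    initialDelaySeconds: 5              # wait 5 seconds after start before the first probe
    periodSeconds: 10                   # probe every 10 seconds
    failureThreshold: 3                 # 3 consecutive failures mark the container not ready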
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.200. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.201. .spec.initContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.202. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.203. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.204. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.205. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 14.1.206. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 14.1.207. .spec.initContainers[].resources Description ResourceRequirements describes the compute resource requirements. 
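A hypothetical sketch of the requests and limits maps on an init container; the claims field is omitted because it is alpha and requires the DynamicResourceAllocation feature gate. As noted in the initContainers description, the scheduler considers the highest init-container request/limit per resource alongside the sum of the regular containers.

initContainers:
- name: setup                              # hypothetical
  image: registry.example.com/tools:1.0    # hypothetical
  command: ["sh", "-c", "echo preparing"]
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "500m"        # requests cannot exceed limits
      memory: "256Mi"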
Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.208. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.209. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.210. .spec.initContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. 
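A minimal, hypothetical sketch of a restrictive securityContext on an init container using the fields in this table (runAsNonRoot and seccompProfile are described immediately below):

initContainers:
- name: setup                              # hypothetical
  image: registry.example.com/tools:1.0    # hypothetical
  securityContext:
    allowPrivilegeEscalation: false
    privileged: false
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    capabilities:
      drop: ["ALL"]                        # drop all POSIX capabilities
    seccompProfile:
      type: RuntimeDefault                 # use the container runtime's default seccomp profile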
runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.211. .spec.initContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 14.1.212. .spec.initContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.213. .spec.initContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.214. .spec.initContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. 
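For Windows pods (spec.os.name: windows), a hypothetical fragment of a container securityContext; the credential spec name is assumed to refer to an existing GMSACredentialSpec resource in the cluster:

securityContext:
  windowsOptions:
    gmsaCredentialSpecName: example-gmsa-spec   # hypothetical; inlined by the GMSA admission webhook
    runAsUserName: "ContainerUser"              # Windows account to run the entrypoint as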
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.215. .spec.initContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.216. .spec.initContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. 
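Init containers may not have startup probes, so the following hypothetical sketch shows the exec form on a regular container; failureThreshold × periodSeconds gives a slow-starting application up to 300 seconds to create its marker file before the kubelet restarts it:

containers:
- name: app
  image: registry.example.com/app:1.0                        # hypothetical
  startupProbe:
    exec:
      command: ["sh", "-c", "test -f /var/run/app/started"]  # hypothetical marker file
    failureThreshold: 30
    periodSeconds: 10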
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.217. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.218. .spec.initContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.219. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.220. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.221. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.222. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 14.1.223. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 14.1.224. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 14.1.225. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. 
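A common pattern for these fields is an init container that renders data onto a shared emptyDir volume which the app container then mounts read-only; all names, images, and paths below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: volume-mount-demo                    # hypothetical
spec:
  volumes:
  - name: shared-config
    emptyDir: {}
  initContainers:
  - name: render-config
    image: registry.example.com/tools:1.0    # hypothetical
    command: ["sh", "-c", "echo 'key=value' > /work/app.conf"]
    volumeMounts:
    - name: shared-config                    # must match a volume name in .spec.volumes
      mountPath: /work
  containers:
  - name: app
    image: registry.example.com/app:1.0      # hypothetical
    volumeMounts:
    - name: shared-config
      mountPath: /etc/app
      readOnly: true                         # mounted read-write by default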
Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 14.1.226. .spec.os Description PodOS defines the OS parameters of a pod. Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 14.1.227. .spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 14.1.228. .spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 14.1.229. .spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 14.1.230. .spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. 
It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. 14.1.231. .spec.resourceClaims[].source Description ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 14.1.232. .spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. Type array 14.1.233. .spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 14.1.234. .spec.securityContext Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. 
This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Always" indicates that volume's ownership and permissions should always be changed whenever volume is mounted inside a Pod. This the default behavior. - "OnRootMismatch" indicates that volume's ownership and permissions will be changed only when permission and ownership of root directory does not match with expected permissions on the volume. This can help shorten the time it takes to change ownership and permissions of a volume. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.235. .spec.securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.236. 
.spec.securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.237. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 14.1.238. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 14.1.239. .spec.securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.240. .spec.tolerations Description If specified, the pod's tolerations. Type array 14.1.241. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. 
Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 14.1.242. .spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 14.1.243. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. 
| zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. 
Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 14.1.244. .spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 14.1.245. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. configMap object Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. csi object Represents a source location of a volume to mount, managed by an external CSI driver downwardAPI object DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. emptyDir object Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. 
ephemeral object Represents an ephemeral volume that is handled by a normal storage driver. fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. gitRepo object Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. persistentVolumeClaim object PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. projected object Represents a projected volume source quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOVolumeSource represents a persistent ScaleIO volume secret object Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. storageos object Represents a StorageOS persistent volume resource. vsphereVolume object Represents a vSphere volume resource. 14.1.246. .spec.volumes[].awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. 
An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 14.1.247. .spec.volumes[].azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 14.1.248. .spec.volumes[].azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 14.1.249. .spec.volumes[].cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 14.1.250. .spec.volumes[].cephfs.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.251. .spec.volumes[].cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 14.1.252. .spec.volumes[].cinder.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.253. .spec.volumes[].configMap Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 14.1.254. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.255. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.256. .spec.volumes[].csi Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 14.1.257. .spec.volumes[].csi.nodePublishSecretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.258. .spec.volumes[].downwardAPI Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 14.1.259. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 14.1.260. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 14.1.261. .spec.volumes[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.262. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.263. .spec.volumes[].emptyDir Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 14.1.264. .spec.volumes[].ephemeral Description Represents an ephemeral volume that is handled by a normal storage driver. 
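A minimal illustrative sketch of a generic ephemeral volume entry under .spec.volumes (the volume name, storage class, and requested size are hypothetical):

volumes:
- name: scratch
  ephemeral:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: standard    # hypothetical StorageClass name
        resources:
          requests:
            storage: 1Gi

The template produces a per-pod PersistentVolumeClaim that is owned by the pod and removed when the pod is deleted.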
Type object Property Type Description volumeClaimTemplate object PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. 14.1.265. .spec.volumes[].ephemeral.volumeClaimTemplate Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes 14.1.266. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. 
volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 14.1.267. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 14.1.268. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 14.1.269. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. 
claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.270. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.271. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.272. .spec.volumes[].fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 14.1.273. .spec.volumes[].flexVolume Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 14.1.274. .spec.volumes[].flexVolume.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.275. .spec.volumes[].flocker Description Represents a Flocker volume mounted by the Flocker agent. 
One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 14.1.276. .spec.volumes[].gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 14.1.277. .spec.volumes[].gitRepo Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 14.1.278. .spec.volumes[].glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 14.1.279. 
.spec.volumes[].hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 14.1.280. .spec.volumes[].iscsi Description Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 14.1.281. .spec.volumes[].iscsi.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.282. .spec.volumes[].nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. 
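A minimal illustrative sketch of an NFS volume entry under .spec.volumes (the server host and export path are hypothetical):

volumes:
- name: shared-data
  nfs:
    server: nfs.example.com     # hypothetical NFS server
    path: /exports/shared
    readOnly: true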
Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 14.1.283. .spec.volumes[].persistentVolumeClaim Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 14.1.284. .spec.volumes[].photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 14.1.285. .spec.volumes[].portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 14.1.286. .spec.volumes[].projected Description Represents a projected volume source Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 14.1.287. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 14.1.288. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. 
Note that this is identical to a configmap volume source without the default mode. downwardAPI object Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. secret object Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. serviceAccountToken object ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). 14.1.289. .spec.volumes[].projected.sources[].configMap Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 14.1.290. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.291. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.292. 
.spec.volumes[].projected.sources[].downwardAPI Description Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 14.1.293. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 14.1.294. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 14.1.295. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.296. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.297. .spec.volumes[].projected.sources[].secret Description Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specifies whether the Secret or its key must be defined 14.1.298. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.299. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.300. .spec.volumes[].projected.sources[].serviceAccountToken Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 14.1.301. .spec.volumes[].quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.
registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to. Defaults to the service account user volume string volume is a string that references an already created Quobyte volume by name. 14.1.302. .spec.volumes[].rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 14.1.303. .spec.volumes[].rbd.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.304. .spec.volumes[].scaleIO Description ScaleIOVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain.
system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 14.1.305. .spec.volumes[].scaleIO.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.306. .spec.volumes[].secret Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 14.1.307. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.308. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.309. 
.spec.volumes[].storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 14.1.310. .spec.volumes[].storageos.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.311. .spec.volumes[].vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 14.1.312. .status Description PodStatus represents information about the status of a pod. Status may trail the actual state of a system, especially if the node that hosts the pod cannot contact the control plane. Type object Property Type Description conditions array Current service state of pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions conditions[] object PodCondition contains details for the current condition of this pod. containerStatuses array The list has one entry per container in the manifest. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status containerStatuses[] object ContainerStatus contains details for the current status of this container. ephemeralContainerStatuses array Status for any ephemeral containers that have run in this pod. ephemeralContainerStatuses[] object ContainerStatus contains details for the current status of this container. hostIP string IP address of the host to which the pod is assigned. Empty if not yet scheduled. initContainerStatuses array The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status initContainerStatuses[] object ContainerStatus contains details for the current status of this container. message string A human readable message indicating details about why the pod is in this condition. nominatedNodeName string nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled. phase string The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values: Pending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. Succeeded: All containers in the pod have terminated in success, and will not be restarted. Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase Possible enum values: - "Failed" means that all containers in the pod have terminated, and at least one container has terminated in a failure (exited with a non-zero exit code or was stopped by the system). - "Pending" means the pod has been accepted by the system, but one or more of the containers has not been started. This includes time before being bound to a node, as well as time spent pulling images onto the host. - "Running" means the pod has been bound to a node and all of the containers have been started. At least one container is still running or is in the process of being restarted. - "Succeeded" means that all containers in the pod have voluntarily terminated with a container exit code of 0, and the system is not going to restart any of these containers. - "Unknown" means that for some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod. Deprecated: It isn't being set since 2015 (74da3b14b0c0f658b3bb8d2def5094686d0e9095) podIP string IP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated. podIPs array podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet. podIPs[] object IP address information for entries in the (plural) PodIPs field. 
Each entry includes: IP: An IP address allocated to the pod. Routable at least within the cluster. qosClass string The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes Possible enum values: - "BestEffort" is the BestEffort qos class. - "Burstable" is the Burstable qos class. - "Guaranteed" is the Guaranteed qos class. reason string A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted' resize string Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" startTime Time RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod. 14.1.313. .status.conditions Description Current service state of pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions Type array 14.1.314. .status.conditions[] Description PodCondition contains details for the current condition of this pod. Type object Required type status Property Type Description lastProbeTime Time Last time we probed the condition. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, one-word, CamelCase reason for the condition's last transition. status string Status is the status of the condition. Can be True, False, Unknown. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions type string Type is the type of the condition. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions 14.1.315. .status.containerStatuses Description The list has one entry per container in the manifest. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status Type array 14.1.316. .status.containerStatuses[] Description ContainerStatus contains details for the current status of this container. Type object Required name ready restartCount image imageID Property Type Description allocatedResources object (Quantity) AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize. containerID string ContainerID is the ID of the container in the format '<type>://<container_id>'. Where type is a container runtime identifier, returned from Version call of CRI API (for example "containerd"). image string Image is the name of container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images . imageID string ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime. lastState object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. name string Name is a DNS_LABEL representing the unique name of the container. 
Each container in a pod must have a unique name across all container types. Cannot be updated. ready boolean Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field). The value is typically used to determine whether a container is ready to accept traffic. resources object ResourceRequirements describes the compute resource requirements. restartCount integer RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative. started boolean Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false. state object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. 14.1.317. .status.containerStatuses[].lastState Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.318. .status.containerStatuses[].lastState.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.319. .status.containerStatuses[].lastState.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.320. .status.containerStatuses[].lastState.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.321. .status.containerStatuses[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.322. .status.containerStatuses[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.323. .status.containerStatuses[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.324. .status.containerStatuses[].state Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.325. .status.containerStatuses[].state.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.326. .status.containerStatuses[].state.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.327. .status.containerStatuses[].state.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.328. .status.ephemeralContainerStatuses Description Status for any ephemeral containers that have run in this pod. Type array 14.1.329. .status.ephemeralContainerStatuses[] Description ContainerStatus contains details for the current status of this container. 
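The required fields listed below (name, ready, restartCount, image, imageID) appear in every entry. As an illustration only, a single status.ephemeralContainerStatuses entry might be rendered roughly as the following sketch; every value is a made-up placeholder, not output captured from a real cluster.

status:
  ephemeralContainerStatuses:
  - name: debugger                               # hypothetical ephemeral container name
    image: registry.example.com/tools:latest     # placeholder image reference
    imageID: registry.example.com/tools@sha256:1111111111111111111111111111111111111111111111111111111111111111   # placeholder digest
    containerID: cri-o://4f5e6d7c8b9a            # placeholder '<type>://<container_id>' value
    ready: false
    restartCount: 0
    state:
      running:
        startedAt: "2024-01-01T00:00:00Z"        # illustrative timestamp only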
Type object Required name ready restartCount image imageID Property Type Description allocatedResources object (Quantity) AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize. containerID string ContainerID is the ID of the container in the format '<type>://<container_id>'. Where type is a container runtime identifier, returned from Version call of CRI API (for example "containerd"). image string Image is the name of container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images . imageID string ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime. lastState object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. name string Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated. ready boolean Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field). The value is typically used to determine whether a container is ready to accept traffic. resources object ResourceRequirements describes the compute resource requirements. restartCount integer RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative. started boolean Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false. state object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. 14.1.330. .status.ephemeralContainerStatuses[].lastState Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.331. .status.ephemeralContainerStatuses[].lastState.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.332. 
.status.ephemeralContainerStatuses[].lastState.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.333. .status.ephemeralContainerStatuses[].lastState.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.334. .status.ephemeralContainerStatuses[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.335. .status.ephemeralContainerStatuses[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.336. .status.ephemeralContainerStatuses[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.337. .status.ephemeralContainerStatuses[].state Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.338. .status.ephemeralContainerStatuses[].state.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.339. 
.status.ephemeralContainerStatuses[].state.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.340. .status.ephemeralContainerStatuses[].state.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.341. .status.initContainerStatuses Description The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status Type array 14.1.342. .status.initContainerStatuses[] Description ContainerStatus contains details for the current status of this container. Type object Required name ready restartCount image imageID Property Type Description allocatedResources object (Quantity) AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize. containerID string ContainerID is the ID of the container in the format '<type>://<container_id>'. Where type is a container runtime identifier, returned from Version call of CRI API (for example "containerd"). image string Image is the name of container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images . imageID string ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime. lastState object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. name string Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated. ready boolean Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field). The value is typically used to determine whether a container is ready to accept traffic. resources object ResourceRequirements describes the compute resource requirements. restartCount integer RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative. 
started boolean Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false. state object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. 14.1.343. .status.initContainerStatuses[].lastState Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.344. .status.initContainerStatuses[].lastState.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.345. .status.initContainerStatuses[].lastState.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.346. .status.initContainerStatuses[].lastState.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.347. .status.initContainerStatuses[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.348. 
.status.initContainerStatuses[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.349. .status.initContainerStatuses[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.350. .status.initContainerStatuses[].state Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.351. .status.initContainerStatuses[].state.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.352. .status.initContainerStatuses[].state.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.353. .status.initContainerStatuses[].state.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.354. .status.podIPs Description podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet. Type array 14.1.355. .status.podIPs[] Description IP address information for entries in the (plural) PodIPs field. Each entry includes: Type object Property Type Description ip string ip is an IP address (IPv4 or IPv6) assigned to the pod 14.2. API endpoints The following API endpoints are available: /api/v1/pods GET : list or watch objects of kind Pod /api/v1/watch/pods GET : watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/pods DELETE : delete collection of Pod GET : list or watch objects of kind Pod POST : create a Pod /api/v1/watch/namespaces/{namespace}/pods GET : watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. 
/api/v1/namespaces/{namespace}/pods/{name} DELETE : delete a Pod GET : read the specified Pod PATCH : partially update the specified Pod PUT : replace the specified Pod /api/v1/namespaces/{namespace}/pods/{name}/log GET : read log of the specified Pod /api/v1/watch/namespaces/{namespace}/pods/{name} GET : watch changes to an object of kind Pod. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/pods/{name}/status GET : read status of the specified Pod PATCH : partially update status of the specified Pod PUT : replace status of the specified Pod /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers GET : read ephemeralcontainers of the specified Pod PATCH : partially update ephemeralcontainers of the specified Pod PUT : replace ephemeralcontainers of the specified Pod 14.2.1. /api/v1/pods Table 14.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available.
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Pod Table 14.2. HTTP responses HTTP code Reponse body 200 - OK PodList schema 401 - Unauthorized Empty 14.2.2. /api/v1/watch/pods Table 14.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. Table 14.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.3. /api/v1/namespaces/{namespace}/pods Table 14.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 14.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Pod Table 14.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 14.8. Body parameters Parameter Type Description body DeleteOptions schema Table 14.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Pod Table 14.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.11. HTTP responses HTTP code Reponse body 200 - OK PodList schema 401 - Unauthorized Empty HTTP method POST Description create a Pod Table 14.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.13. Body parameters Parameter Type Description body Pod schema Table 14.14. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 202 - Accepted Pod schema 401 - Unauthorized Empty 14.2.4. /api/v1/watch/namespaces/{namespace}/pods Table 14.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 14.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. Table 14.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.5. /api/v1/namespaces/{namespace}/pods/{name} Table 14.18. Global path parameters Parameter Type Description name string name of the Pod namespace string object name and auth scope, such as for teams and projects Table 14.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Pod Table 14.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 14.21. Body parameters Parameter Type Description body DeleteOptions schema Table 14.22. HTTP responses HTTP code Reponse body 200 - OK Pod schema 202 - Accepted Pod schema 401 - Unauthorized Empty HTTP method GET Description read the specified Pod Table 14.23. HTTP responses HTTP code Reponse body 200 - OK Pod schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Pod Table 14.24. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 14.25. Body parameters Parameter Type Description body Patch schema Table 14.26. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Pod Table 14.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.28. Body parameters Parameter Type Description body Pod schema Table 14.29. HTTP responses HTTP code Response body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty 14.2.6. /api/v1/namespaces/{namespace}/pods/{name}/log Table 14.30. Global path parameters Parameter Type Description name string name of the Pod namespace string object name and auth scope, such as for teams and projects Table 14.31. Global query parameters Parameter Type Description container string The container for which to stream logs. Defaults to only container if there is one container in the pod. follow boolean Follow the log stream of the pod. Defaults to false. insecureSkipTLSVerifyBackend boolean insecureSkipTLSVerifyBackend indicates that the apiserver should not confirm the validity of the serving certificate of the backend it is connecting to. This will make the HTTPS connection between the apiserver and the backend insecure. This means the apiserver cannot verify the log data it is receiving came from the real kubelet. If the kubelet is configured to verify the apiserver's TLS credentials, it does not mean the connection to the real kubelet is vulnerable to a man in the middle attack (e.g. an attacker could not intercept the actual log data coming from the real kubelet). limitBytes integer If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit. pretty string If 'true', then the output is pretty printed. previous boolean Return previous terminated container logs. Defaults to false. sinceSeconds integer A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified. tailLines integer If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime. timestamps boolean If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. Defaults to false. HTTP method GET Description read log of the specified Pod Table 14.32. HTTP responses HTTP code Response body 200 - OK string 401 - Unauthorized Empty 14.2.7. /api/v1/watch/namespaces/{namespace}/pods/{name} Table 14.33. Global path parameters Parameter Type Description name string name of the Pod namespace string object name and auth scope, such as for teams and projects Table 14.34. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server.
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Pod. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 14.35. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.8. /api/v1/namespaces/{namespace}/pods/{name}/status Table 14.36. Global path parameters Parameter Type Description name string name of the Pod namespace string object name and auth scope, such as for teams and projects Table 14.37. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Pod Table 14.38. HTTP responses HTTP code Reponse body 200 - OK Pod schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Pod Table 14.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 14.40. Body parameters Parameter Type Description body Patch schema Table 14.41. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Pod Table 14.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.43. Body parameters Parameter Type Description body Pod schema Table 14.44. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty 14.2.9. /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers Table 14.45. Global path parameters Parameter Type Description name string name of the Pod namespace string object name and auth scope, such as for teams and projects Table 14.46. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read ephemeralcontainers of the specified Pod Table 14.47. HTTP responses HTTP code Reponse body 200 - OK Pod schema 401 - Unauthorized Empty HTTP method PATCH Description partially update ephemeralcontainers of the specified Pod Table 14.48. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 14.49. Body parameters Parameter Type Description body Patch schema Table 14.50. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty HTTP method PUT Description replace ephemeralcontainers of the specified Pod Table 14.51. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
Table 14.52. Body parameters Parameter Type Description body Pod schema Table 14.53. HTTP responses HTTP code Response body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty | [
"IP: An IP address allocated to the pod. Routable at least within the cluster."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/pod-v1 |
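To connect the Pod endpoint reference above to practice, the following is a minimal, hypothetical sketch of a manifest that could be submitted to the POST /api/v1/namespaces/{namespace}/pods endpoint described in section 14.2.3. The namespace, pod name, and image are placeholder assumptions, not values taken from this document.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical pod name
  namespace: example-ns      # hypothetical namespace
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image reference
    resources:
      requests:
        cpu: 250m            # illustrative resource request
        memory: 128Mi

Running oc create -f pod.yaml issues the same POST request, oc logs example-pod -n example-ns reads the /log subresource from section 14.2.6, and oc delete pod example-pod -n example-ns maps to the DELETE operation in section 14.2.5.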
Chapter 1. OpenShift Container Platform storage overview | Chapter 1. OpenShift Container Platform storage overview OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. 1.1. Glossary of common terms for OpenShift Container Platform storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Cinder The Block Storage service for Red Hat OpenStack Platform (RHOSP) which manages the administration, security, and scheduling of all volumes. Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Fiber channel A networking technology that is used to transfer data among data centers, computer servers, switches and storage. FlexVolume FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes. fsGroup The fsGroup defines a file system group ID of a pod. iSCSI Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. you can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. NFS A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. 
Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the next session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use persistent disk storage. You can use the StatefulSet object in OpenShift Container Platform to manage the deployment and scaling of a set of Pods, and it provides guarantees about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, or arbitrary policies determined by the cluster administrators. VMware vSphere's Virtual Machine Disk (VMDK) volumes Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives used in virtual machines. 1.2. Storage Types OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements.
For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/storage/storage-overview |
Chapter 17. Configuring ingress cluster traffic | Chapter 17. Configuring ingress cluster traffic 17.1. Configuring ingress cluster traffic overview OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster. The methods are recommended, in order or preference: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS. For example, for TLS with the SNI header, use an Ingress Controller. Otherwise, use a Load Balancer, an External IP, or a NodePort . Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header). Automatically assign an external IP using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 17.2. Configuring ExternalIPs for services As a cluster administrator, you can designate an IP address block that is external to the cluster that can send traffic to services in the cluster. This functionality is generally most useful for clusters installed on bare-metal hardware. 17.2.1. Prerequisites Your network infrastructure must route traffic for the external IP addresses to your cluster. 17.2.2. About ExternalIP For non-cloud environments, OpenShift Container Platform supports the assignment of external IP addresses to a Service object spec.externalIPs[] field through the ExternalIP facility. By setting this field, OpenShift Container Platform assigns an additional virtual IP address to the service. The IP address can be outside the service network defined for the cluster. A service configured with an ExternalIP functions similarly to a service with type=NodePort , allowing you to direct traffic to a local node for load balancing. You must configure your networking infrastructure to ensure that the external IP address blocks that you define are routed to the cluster. OpenShift Container Platform extends the ExternalIP functionality in Kubernetes by adding the following capabilities: Restrictions on the use of external IP addresses by users through a configurable policy Allocation of an external IP address automatically to a service upon request Warning Disabled by default, use of ExternalIP functionality can be a security risk, because in-cluster traffic to an external IP address is directed to that service. This could allow cluster users to intercept sensitive traffic destined for external resources. Important This feature is supported only in non-cloud deployments. For cloud deployments, use the load balancer services for automatic deployment of a cloud load balancer to target the endpoints of a service. You can assign an external IP address in the following ways: Automatic assignment of an external IP OpenShift Container Platform automatically assigns an IP address from the autoAssignCIDRs CIDR block to the spec.externalIPs[] array when you create a Service object with spec.type=LoadBalancer set. In this case, OpenShift Container Platform implements a non-cloud version of the load balancer service type and assigns IP addresses to the services. Automatic assignment is disabled by default and must be configured by a cluster administrator as described in the following section. 
Manual assignment of an external IP OpenShift Container Platform uses the IP addresses assigned to the spec.externalIPs[] array when you create a Service object. You cannot specify an IP address that is already in use by another service. 17.2.2.1. Configuration for ExternalIP Use of an external IP address in OpenShift Container Platform is governed by the following fields in the Network.config.openshift.io CR named cluster : spec.externalIP.autoAssignCIDRs defines an IP address block used by the load balancer when choosing an external IP address for the service. OpenShift Container Platform supports only a single IP address block for automatic assignment. This can be simpler than having to manage the port space of a limited number of shared IP addresses when manually assigning ExternalIPs to services. If automatic assignment is enabled, a Service object with spec.type=LoadBalancer is allocated an external IP address. spec.externalIP.policy defines the permissible IP address blocks when manually specifying an IP address. OpenShift Container Platform does not apply policy rules to IP address blocks defined by spec.externalIP.autoAssignCIDRs . If routed correctly, external traffic from the configured external IP address block can reach service endpoints through any TCP or UDP port that the service exposes. Important You must ensure that the IP address block you assign terminates at one or more nodes in your cluster. OpenShift Container Platform supports both the automatic and manual assignment of IP addresses, and each address is guaranteed to be assigned to a maximum of one service. This ensures that each service can expose its chosen ports regardless of the ports exposed by other services. Note To use IP address blocks defined by autoAssignCIDRs in OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network. The following YAML describes a service with an external IP address configured: Example Service object with spec.externalIPs[] set apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253 17.2.2.2. Restrictions on the assignment of an external IP address As a cluster administrator, you can specify IP address blocks to allow and to reject. Restrictions apply only to users without cluster-admin privileges. A cluster administrator can always set the service spec.externalIPs[] field to any IP address. You configure IP address policy with a policy object defined by specifying the spec.ExternalIP.policy field. The policy object has the following shape: { "policy": { "allowedCIDRs": [], "rejectedCIDRs": [] } } When configuring policy restrictions, the following rules apply: If policy={} is set, then creating a Service object with spec.ExternalIPs[] set will fail. This is the default for OpenShift Container Platform. The behavior when policy=null is set is identical. If policy is set and either policy.allowedCIDRs[] or policy.rejectedCIDRs[] is set, the following rules apply: If allowedCIDRs[] and rejectedCIDRs[] are both set, then rejectedCIDRs[] has precedence over allowedCIDRs[] . If allowedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] will succeed only if the specified IP addresses are allowed. 
If rejectedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] will succeed only if the specified IP addresses are not rejected. 17.2.2.3. Example policy objects The examples that follow demonstrate several different policy configurations. In the following example, the policy prevents OpenShift Container Platform from creating any service with an external IP address specified: Example policy to reject any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {} ... In the following example, both the allowedCIDRs and rejectedCIDRs fields are set. Example policy that includes both allowed and rejected CIDR blocks apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24 ... In the following example, policy is set to null . If set to null , when inspecting the configuration object by entering oc get networks.config.openshift.io -o yaml , the policy field will not appear in the output. Example policy to allow any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: null ... 17.2.3. ExternalIP address block configuration The configuration for ExternalIP address blocks is defined by a Network custom resource (CR) named cluster . The Network CR is part of the config.openshift.io API group. Important During cluster installation, the Cluster Version Operator (CVO) automatically creates a Network CR named cluster . Creating any other CR objects of this type is not supported. The following YAML describes the ExternalIP configuration: Network.config.openshift.io CR named cluster apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2 ... 1 Defines the IP address block in CIDR format that is available for automatic assignment of external IP addresses to a service. Only a single IP address range is allowed. 2 Defines restrictions on manual assignment of an IP address to a service. If no restrictions are defined, specifying the spec.externalIP field in a Service object is not allowed. By default, no restrictions are defined. The following YAML describes the fields for the policy stanza: Network.config.openshift.io policy stanza policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2 1 A list of allowed IP address ranges in CIDR format. 2 A list of rejected IP address ranges in CIDR format. Example external IP configurations Several possible configurations for external IP address pools are displayed in the following examples: The following YAML describes a configuration that enables automatically assigned external IP addresses: Example configuration with spec.externalIP.autoAssignCIDRs set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: autoAssignCIDRs: - 192.168.132.254/29 The following YAML configures policy rules for the allowed and rejected CIDR ranges: Example configuration with spec.externalIP.policy set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32 17.2.4. 
Configure external IP address blocks for your cluster As a cluster administrator, you can configure the following ExternalIP settings: An ExternalIP address block used by OpenShift Container Platform to automatically populate the spec.externalIPs[] field for a Service object. A policy object to restrict what IP addresses may be manually assigned to the spec.externalIPs[] array of a Service object. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Optional: To display the current external IP configuration, enter the following command: USD oc describe networks.config cluster To edit the configuration, enter the following command: USD oc edit networks.config cluster Modify the ExternalIP configuration, as in the following example: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: 1 ... 1 Specify the configuration for the externalIP stanza. To confirm the updated ExternalIP configuration, enter the following command: USD oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}' 17.2.5. Next steps Configuring ingress cluster traffic for a service external IP 17.3. Configuring ingress cluster traffic using an Ingress Controller OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller. 17.3.1. Using Ingress Controllers and routes The Ingress Operator manages Ingress Controllers and wildcard DNS. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI. Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes. The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators. By default, every ingress controller in the cluster can admit any route created in any project in the cluster. The Ingress Controller: Has two replicas by default, which means it should be running on two worker nodes. Can be scaled up to have more replicas on more nodes. Note The procedures in this section require prerequisites performed by the cluster administrator. 17.3.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 17.3.3. Creating a project and service If the project and service that you want to expose do not exist, first create the project, then the service. If the project and service already exist, skip to the procedure on exposing the service to create a route.
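For orientation, the nodejs-ex service that the procedure below creates with oc new-app could also be expressed declaratively. The following Service manifest is only an illustrative sketch; the selector label is an assumption, because oc new-app applies its own generated labels:

apiVersion: v1
kind: Service
metadata:
  name: nodejs-ex
  namespace: myproject
spec:
  type: ClusterIP
  selector:
    app: nodejs-ex          # assumed label, for illustration only
  ports:
    - name: 8080-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080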
Prerequisites Install the oc CLI and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project myproject Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s By default, the new service does not have an external IP address. 17.3.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Procedure To expose the service: Log in to OpenShift Container Platform. Log in to the project where the service you want to expose is located: USD oc project myproject Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as cURL, to make sure the service is accessible from outside the cluster. Use the oc get route command to find the route's host name: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None Use cURL to check that the host responds to a GET request: USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 17.3.5. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: # cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that have the label type: sharded . 17.3.6. Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Warning If you deploy the Keepalived Ingress VIP, do not deploy a non-default Ingress Controller with value HostNetwork for the endpointPublishingStrategy parameter. Doing so might cause issues. Use value NodePort instead of HostNetwork for endpointPublishingStrategy . 
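To illustrate the warning above, an Ingress Controller that publishes through node ports instead of the host network sets the endpointPublishingStrategy type to NodePortService, which is the API value for the NodePort strategy. This is a hedged sketch for reference only and is not part of the procedure that follows:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: <apps-sharded.basedomain.example.net>
  endpointPublishingStrategy:
    type: NodePortService   # NodePort publishing rather than HostNetwork
  namespaceSelector:
    matchLabels:
      type: sharded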
Procedure Edit the router-internal.yaml file: # cat router-internal.yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded . 17.3.7. Additional resources The Ingress Operator manages wildcard DNS. For more information, see Ingress Operator in OpenShift Container Platform , Installing a cluster on bare metal , and Installing a cluster on vSphere . 17.4. Configuring ingress cluster traffic using a load balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer. 17.4.1. Using a load balancer to get traffic into the cluster If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster. A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing. Note If a pool is configured, it is done at the infrastructure level, not by a cluster administrator. Note The procedures in this section require prerequisites performed by the cluster administrator. 17.4.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 17.4.3. Creating a project and service If the project and service that you want to expose do not exist, first create the project, then the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the oc CLI and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project myproject Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s By default, the new service does not have an external IP address. 17.4.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Procedure To expose the service: Log in to OpenShift Container Platform. 
Log in to the project where the service you want to expose is located: USD oc project myproject Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as cURL, to make sure the service is accessible from outside the cluster. Use the oc get route command to find the route's host name: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None Use cURL to check that the host responds to a GET request: USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 17.4.5. Creating a load balancer service Use the following procedure to create a load balancer service. Prerequisites Make sure that the project and service you want to expose exist. Procedure To create a load balancer service: Log in to OpenShift Container Platform. Load the project where the service you want to expose is located. USD oc project project1 Open a text file on the control plane node (also known as the master node) and paste the following text, editing the file as needed: Sample load balancer configuration file 1 Enter a descriptive name for the load balancer service. 2 Enter the same port that the service you want to expose is listening on. 3 Enter a list of specific IP addresses to restrict traffic through the load balancer. This field is ignored if the cloud-provider does not support the feature. 4 Enter Loadbalancer as the type. 5 Enter the name of the service. Note To restrict traffic through the load balancer to specific IP addresses, it is recommended to use the service.beta.kubernetes.io/load-balancer-source-ranges annotation rather than setting the loadBalancerSourceRanges field. With the annotation, you can more easily migrate to the OpenShift API, which will be implemented in a future release. Save and exit the file. Run the following command to create the service: USD oc create -f <file-name> For example: USD oc create -f mysql-lb.yaml Execute the following command to view the new service: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m The service has an external IP address automatically assigned if there is a cloud provider enabled. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address: USD curl <public-ip>:<port> For example: USD curl 172.29.121.74:3306 The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connecting with the service: If you have a MySQL client, log in with the standard CLI command: USD mysql -h 172.30.131.89 -u admin -p Example output Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]> 17.5. Configuring ingress cluster traffic on AWS using a Network Load Balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a Network Load Balancer (NLB), which forwards the client's IP address to the node. You can configure an NLB on a new or existing AWS cluster. 17.5.1. 
Replacing Ingress Controller Classic Load Balancer with Network Load Balancer You can replace an Ingress Controller that is using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB) on AWS. Warning This procedure causes an expected outage that can last several minutes due to new DNS record propagation, new load balancer provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Procedure Create a file with a new default Ingress Controller. The following example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService If your default Ingress Controller has other customizations, ensure that you modify the file accordingly. Force replace the Ingress Controller YAML file: USD oc replace --force --wait -f ingresscontroller.yml Wait until the Ingress Controller is replaced. Expect an outage of several minutes. 17.5.2. Configuring an Ingress Controller Network Load Balancer on an existing AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on an existing cluster. Prerequisites You must have an installed AWS cluster. PlatformStatus of the infrastructure resource must be AWS. To verify that the PlatformStatus is AWS, run: USD oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS Procedure Create an Ingress Controller backed by an AWS NLB on an existing cluster. Create the Ingress Controller manifest: USD cat ingresscontroller-aws-nlb.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB 1 Replace USDmy_ingress_controller with a unique name for the Ingress Controller. 2 Replace USDmy_unique_ingress_domain with a domain name that is unique among all Ingress Controllers in the cluster. 3 You can replace External with Internal to use an internal NLB. Create the resource in the cluster: USD oc create -f ingresscontroller-aws-nlb.yaml Important Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the Creating the installation configuration file procedure. 17.5.3. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster.
Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 17.5.4. Additional resources Installing a cluster on AWS with network customizations . For more information, see Network Load Balancer support on AWS . 17.6. Configuring ingress cluster traffic for a service external IP You can attach an external IP address to a service so that it is available to traffic outside the cluster. This is generally useful only for a cluster installed on bare metal hardware. The external network infrastructure must be configured correctly to route traffic to the service. 17.6.1. Prerequisites Your cluster is configured with ExternalIPs enabled. For more information, read Configuring ExternalIPs for services . 17.6.2. Attaching an ExternalIP to a service You can attach an ExternalIP to a service. If your cluster is configured to allocate an ExternalIP automatically, you might not need to manually attach an ExternalIP to the service. Procedure Optional: To confirm what IP address ranges are configured for use with ExternalIP, enter the following command: USD oc get networks.config cluster -o jsonpath='{.spec.externalIP}{"\n"}' If autoAssignCIDRs is set, OpenShift Container Platform automatically assigns an ExternalIP to a new Service object if the spec.externalIPs field is not specified. Attach an ExternalIP to the service. If you are creating a new service, specify the spec.externalIPs field and provide an array of one or more valid IP addresses. For example: apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: ... externalIPs: - 192.174.120.10 If you are attaching an ExternalIP to an existing service, enter the following command. Replace <name> with the service name. Replace <ip_address> with a valid ExternalIP address. You can provide multiple IP addresses separated by commas. USD oc patch svc <name> -p \ '{ "spec": { "externalIPs": [ "<ip_address>" ] } }' For example: USD oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}' Example output "mysql-55-rhel7" patched To confirm that an ExternalIP address is attached to the service, enter the following command. If you specified an ExternalIP for a new service, you must create the service first. 
USD oc get svc Example output NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m 17.6.3. Additional resources Configuring ExternalIPs for services 17.7. Configuring ingress cluster traffic using a NodePort OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort . 17.7.1. Using a NodePort to get traffic into the cluster Use a NodePort -type Service resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service resource's .spec.ports[*].nodePort field. Important Using a node port requires additional port resources. A NodePort exposes the service on a static port on the node's IP address. NodePort s are in the 30000 to 32767 range by default, which means a NodePort is unlikely to match a service's intended port. For example, port 8080 may be exposed as port 31020 on the node. The administrator must ensure the external IP addresses are routed to the nodes. NodePort s and external IPs are independent and both can be used concurrently. Note The procedures in this section require prerequisites performed by the cluster administrator. 17.7.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 17.7.3. Creating a project and service If the project and service that you want to expose do not exist, first create the project, then the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the oc CLI and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project myproject Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s By default, the new service does not have an external IP address. 17.7.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Procedure To expose the service: Log in to OpenShift Container Platform. Log in to the project where the service you want to expose is located: USD oc project myproject To expose a node port for the application, enter the following command. OpenShift Container Platform automatically selects an available port in the 30000-32767 range. 
USD oc expose service nodejs-ex --type=NodePort --name=nodejs-ex-nodeport --generator="service/v2" Example output service/nodejs-ex-nodeport exposed Optional: To confirm the service is available with a node port exposed, enter the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s Optional: To remove the service created automatically by the oc new-app command, enter the following command: USD oc delete svc nodejs-ex 17.7.5. Additional resources Configuring the node port service range | [
"apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253",
"{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: null",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2",
"policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32",
"oc describe networks.config cluster",
"oc edit networks.config cluster",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1",
"oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project myproject",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project myproject",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc apply -f router-internal.yaml",
"cat router-internal.yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc apply -f router-internal.yaml",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project myproject",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project myproject",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"oc project project1",
"apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5",
"oc create -f <file-name>",
"oc create -f mysql-lb.yaml",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m",
"curl <public-ip>:<port>",
"curl 172.29.121.74:3306",
"mysql -h 172.30.131.89 -u admin -p",
"Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc replace --force --wait -f ingresscontroller.yml",
"oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS",
"cat ingresscontroller-aws-nlb.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB",
"oc create -f ingresscontroller-aws-nlb.yaml",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'",
"apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: - 192.174.120.10",
"oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'",
"oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'",
"\"mysql-55-rhel7\" patched",
"oc get svc",
"NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m",
"oc adm policy add-cluster-role-to-user cluster-admin <user_name>",
"oc new-project myproject",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project myproject",
"oc expose service nodejs-ex --type=NodePort --name=nodejs-ex-nodeport --generator=\"service/v2\"",
"service/nodejs-ex-nodeport exposed",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s",
"oc delete svc nodejs-ex"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/configuring-ingress-cluster-traffic |
Upgrading from RHEL 6 to RHEL 7 | Upgrading from RHEL 6 to RHEL 7 Red Hat Enterprise Linux 7 Instructions for an in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/upgrading_from_rhel_6_to_rhel_7/index |
Chapter 13. Support | Chapter 13. Support 13.1. Support overview You can request assistance from Red Hat Support, report bugs, collect data about your environment, and monitor the health of your cluster and virtual machines (VMs) with the following tools. 13.1.1. Opening support tickets If you have encountered an issue that requires immediate assistance from Red Hat Support, you can submit a support case. To report a bug, you can create a Jira issue directly. 13.1.1.1. Submitting a support case To request support from Red Hat Support, follow the instructions for submitting a support case . It is helpful to collect debugging data to include with your support request. 13.1.1.1.1. Collecting data for Red Hat Support You can gather debugging information by performing the following steps: Collecting data about your environment Configure Prometheus and Alertmanager and collect must-gather data for OpenShift Container Platform and OpenShift Virtualization. must-gather tool for OpenShift Virtualization Configure and use the must-gather tool. Collecting data about VMs Collect must-gather data and memory dumps from VMs. 13.1.1.2. Creating a Jira issue To report a bug, you can create a Jira issue directly by filling out the form on the Create Issue page. 13.1.2. Web console monitoring You can monitor the health of your cluster and VMs by using the OpenShift Container Platform web console. The web console displays resource usage, alerts, events, and trends for your cluster and for OpenShift Virtualization components and resources. Table 13.1. Web console pages for monitoring and troubleshooting Page Description Overview page Cluster details, status, alerts, inventory, and resource usage Virtualization Overview tab OpenShift Virtualization resources, usage, alerts, and status Virtualization Top consumers tab Top consumers of CPU, memory, and storage Virtualization Migrations tab Progress of live migrations VirtualMachines VirtualMachine VirtualMachine details Metrics tab VM resource usage, storage, network, and migration VirtualMachines VirtualMachine VirtualMachine details Events tab List of VM events VirtualMachines VirtualMachine VirtualMachine details Diagnostics tab VM status conditions and volume snapshot status 13.2. Collecting data for Red Hat Support When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools: must-gather tool The must-gather tool collects diagnostic information, including resource definitions and service logs. Prometheus Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Alertmanager The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. For information about the OpenShift Container Platform monitoring stack, see About OpenShift Container Platform monitoring . 13.2.1. Collecting data about your environment Collecting data about your environment minimizes the time required to analyze and determine the root cause. Prerequisites Set the retention time for Prometheus metrics data to a minimum of seven days. Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. Record the exact number of affected nodes and virtual machines. 
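As a sketch of the first prerequisite above, the Prometheus retention period is typically set in the cluster monitoring config map. The exact stanza depends on how your monitoring stack is configured, so treat the following only as an illustrative example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 7d    # keep Prometheus metrics data for at least seven days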
Procedure Collect must-gather data for the cluster . Collect must-gather data for Red Hat OpenShift Data Foundation , if necessary. Collect must-gather data for OpenShift Virtualization . Collect Prometheus metrics for the cluster . 13.2.2. Collecting data about virtual machines Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause. Prerequisites Linux VMs: Install the latest QEMU guest agent . Windows VMs: Record the Windows patch update details. Install the latest VirtIO drivers . Install the latest QEMU guest agent . If Remote Desktop Protocol (RDP) is enabled, connect by using the desktop viewer to determine whether there is a problem with the connection software. Procedure Collect must-gather data for the VMs using the /usr/bin/gather script. Collect screenshots of VMs that have crashed before you restart them. Collect memory dumps from VMs before remediation attempts. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network. 13.2.3. Using the must-gather tool for OpenShift Virtualization You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image. The default data collection includes information about the following resources: OpenShift Virtualization Operator namespaces, including child objects OpenShift Virtualization custom resource definitions Namespaces that contain virtual machines Basic virtual machine definitions Instance types information is not currently collected by default; you can, however, run a command to optionally collect it. Procedure Run the following command to collect data about OpenShift Virtualization: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 \ -- /usr/bin/gather 13.2.3.1. must-gather tool options You can specify a combination of scripts and environment variables for the following options: Collecting detailed virtual machine (VM) information from a namespace Collecting detailed information about specified VMs Collecting image, image-stream, and image-stream-tags information Limiting the maximum number of parallel processes used by the must-gather tool 13.2.3.1.1. Parameters Environment variables You can specify environment variables for a compatible script. NS=<namespace_name> Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces. VM=<vm_name> Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable. PROS=<number_of_processes> Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5 . Important Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended. Scripts Each script is compatible only with certain environment variable combinations. /usr/bin/gather Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the PROS variable. /usr/bin/gather --vms_details Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to OpenShift Virtualization resources. Specifying namespaces includes their child objects. 
If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable. /usr/bin/gather --images Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the PROS variable. /usr/bin/gather --instancetypes Collect instance types information. This information is not currently collected by default; you can, however, optionally collect it. 13.2.3.1.2. Usage and examples Environment variables are optional. You can run a script by itself or with one or more compatible environment variables. Table 13.2. Compatible parameters Script Compatible environment variable /usr/bin/gather * PROS=<number_of_processes> /usr/bin/gather --vms_details * For a namespace: NS=<namespace_name> * For a VM: VM=<vm_name> NS=<namespace_name> * PROS=<number_of_processes> /usr/bin/gather --images * PROS=<number_of_processes> Syntax USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 \ -- <environment_variable_1> <environment_variable_2> <script_name> Default data collection parallel processes By default, five processes run in parallel. USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 \ -- PROS=5 /usr/bin/gather 1 1 You can modify the number of parallel processes by changing the default. Detailed VM information The following command collects detailed VM information for the my-vm VM in the mynamespace namespace: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 \ -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1 1 The NS environment variable is mandatory if you use the VM environment variable. Image, image-stream, and image-stream-tags information The following command collects image, image-stream, and image-stream-tags information from the cluster: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 \ /usr/bin/gather --images Instance types information The following command collects instance types information from the cluster: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 \ /usr/bin/gather --instancetypes 13.3. Troubleshooting OpenShift Virtualization provides tools and logs for troubleshooting virtual machines (VMs) and virtualization components. You can troubleshoot OpenShift Virtualization components by using the tools provided in the web console or by using the oc CLI tool. 13.3.1. Events OpenShift Container Platform events are records of important life-cycle information and are useful for monitoring and troubleshooting virtual machine, namespace, and resource issues. VM events: Navigate to the Events tab of the VirtualMachine details page in the web console. Namespace events You can view namespace events by running the following command: USD oc get events -n <namespace> See the list of events for details about specific events. Resource events You can view resource events by running the following command: USD oc describe <resource> <resource_name> 13.3.2. Pod logs You can view logs for OpenShift Virtualization pods by using the web console or the CLI. You can also view aggregated logs by using the LokiStack in the web console. 13.3.2.1. 
Configuring OpenShift Virtualization pod log verbosity You can configure the verbosity level of OpenShift Virtualization pod logs by editing the HyperConverged custom resource (CR). Procedure To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6 1 The log verbosity value must be an integer in the range 1-9 , where a higher number indicates a more detailed log. In this example, the virtAPI component logs are exposed if their priority level is 5 or higher. Apply your changes by saving and exiting the editor. 13.3.2.2. Viewing virt-launcher pod logs with the web console You can view the virt-launcher pod logs for a virtual machine by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines . Select a virtual machine to open the VirtualMachine details page. On the General tile, click the pod name to open the Pod details page. Click the Logs tab to view the logs. 13.3.2.3. Viewing OpenShift Virtualization pod logs with the CLI You can view logs for the OpenShift Virtualization pods by using the oc CLI tool. Procedure View a list of pods in the OpenShift Virtualization namespace by running the following command: USD oc get pods -n openshift-cnv Example 13.1. Example output NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m View the pod log by running the following command: USD oc logs -n openshift-cnv <pod_name> Note If a pod fails to start, you can use the -- option to view logs from the last attempt. To monitor log output in real time, use the -f option. Example 13.2. 
Example output {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"} {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"} {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"} {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"} {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"} {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"} 13.3.3. Guest system logs Viewing the boot logs of VM guests can help diagnose issues. You can configure access to guests' logs and view them by using either the OpenShift Container Platform web console or the oc CLI. This feature is disabled by default. If a VM does not explicitly have this setting enabled or disabled, it inherits the cluster-wide default setting. Important If sensitive information such as credentials or other personally identifiable information (PII) is written to the serial console, it is logged with all other visible text. Red Hat recommends using SSH to send sensitive data instead of the serial console. 13.3.3.1. Enabling default access to VM guest system logs with the web console You can enable default access to VM guest system logs by using the web console. Procedure From the side menu, click Virtualization Overview . Click the Settings tab. Click Cluster Guest management . Set Enable guest system log access to on. 13.3.3.2. Enabling default access to VM guest system logs with the CLI You can enable default access to VM guest system logs by editing the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Update the disableSerialConsoleLog value. For example: kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #... 1 Set the value of disableSerialConsoleLog to false if you want serial console access to be enabled on VMs by default. 13.3.3.3. Setting guest system log access for a single VM with the web console You can configure access to VM guest system logs for a single VM by using the web console. This setting takes precedence over the cluster-wide default configuration. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Configuration tab. Set Guest system log access to on or off. 13.3.3.4. Setting guest system log access for a single VM with the CLI You can configure access to VM guest system logs for a single VM by editing the VirtualMachine CR. This setting takes precedence over the cluster-wide default configuration. 
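If you prefer not to edit the virtual machine interactively, a non-interactive patch along the following lines could apply the same change as the procedure below; the field path mirrors the example shown in that procedure, and <vm_name> and <namespace> are placeholders:

oc patch vm <vm_name> -n <namespace> --type merge -p \
  '{"spec":{"template":{"spec":{"domain":{"devices":{"logSerialConsole":true}}}}}}'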
Procedure Edit the virtual machine manifest by running the following command: USD oc edit vm <vm_name> Update the value of the logSerialConsole field. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #... 1 To enable access to the guest's serial console log, set the logSerialConsole value to true . Apply the new configuration to the VM by running the following command: USD oc apply vm <vm_name> Optional: If you edited a running VM, restart the VM to apply the new configuration. For example: USD virtctl restart <vm_name> -n <namespace> 13.3.3.5. Viewing guest system logs with the web console You can view the serial console logs of a virtual machine (VM) guest by using the web console. Prerequisites Guest system log access is enabled. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Diagnostics tab. Click Guest system logs to load the serial console. 13.3.3.6. Viewing guest system logs with the CLI You can view the serial console logs of a VM guest by running the oc logs command. Prerequisites Guest system log access is enabled. Procedure View the logs by running the following command, substituting your own values for <namespace> and <vm_name> : USD oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log 13.3.4. Log aggregation You can facilitate troubleshooting by aggregating and filtering logs. 13.3.4.1. Viewing aggregated OpenShift Virtualization logs with the LokiStack You can view aggregated logs for OpenShift Virtualization pods and containers by using the LokiStack in the web console. Prerequisites You deployed the LokiStack. Procedure Navigate to Observe Logs in the web console. Select application , for virt-launcher pod logs, or infrastructure , for OpenShift Virtualization control plane pods and containers, from the log type list. Click Show Query to display the query field. Enter the LogQL query in the query field and click Run Query to display the filtered logs. 13.3.4.2. OpenShift Virtualization LogQL queries You can view and filter aggregated logs for OpenShift Virtualization components by running Loki Query Language (LogQL) queries on the Observe Logs page in the web console. The default log type is infrastructure . The virt-launcher log type is application . Optional: You can include or exclude strings or regular expressions by using line filter expressions. Note If the query matches a large number of logs, the query might time out. Table 13.3. 
OpenShift Virtualization LogQL example queries Component LogQL query All {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" cdi-apiserver cdi-deployment cdi-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="storage" hco-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="deployment" kubemacpool {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="network" virt-api virt-controller virt-handler virt-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="compute" ssp-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="schedule" Container {log_type=~".+",kubernetes_container_name=~"<container>|<container>"} 1 |json|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" 1 Specify one or more containers separated by a pipe ( | ). virt-launcher You must select application from the log type list before running this query. {log_type=~".+", kubernetes_container_name="compute"}|json |!= "custom-ga-command" 1 1 |!= "custom-ga-command" excludes libvirt logs that contain the string custom-ga-command . ( BZ#2177684 ) You can filter log lines to include or exclude strings or regular expressions by using line filter expressions. Table 13.4. Line filter expressions Line filter expression Description |= "<string>" Log line contains string != "<string>" Log line does not contain string |~ "<regex>" Log line contains regular expression !~ "<regex>" Log line does not contain regular expression Example line filter expression {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |= "error" != "timeout" Additional resources for LokiStack and LogQL About log storage Deploying the LokiStack LogQL log queries in the Grafana documentation 13.3.5. Common error messages The following error messages might appear in OpenShift Virtualization logs: ErrImagePull or ImagePullBackOff Indicates an incorrect deployment configuration or problems with the images that are referenced. 13.3.6. Troubleshooting data volumes You can check the Conditions and Events sections of the DataVolume object to analyze and resolve issues. 13.3.6.1. About data volume conditions and events You can diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command: USD oc describe dv <DataVolume> The Conditions section displays the following Types : Bound Running Ready The Events section provides the following additional information: Type of event Reason for logging Source of the event Message containing additional diagnostic information. The output from oc describe does not always contains Events . An event is generated when the Status , Reason , or Message changes. Both conditions and events react to changes in the state of the data volume. For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well. 13.3.6.2. 
Analyzing data volume conditions and events By inspecting the Conditions and Events sections generated by the describe command, you determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state. There are many different combinations of conditions. Each must be evaluated in its unique context. Examples of various combinations follow. Bound - A successfully bound PVC displays in this example. Note that the Type is Bound , so the Status is True . If the PVC is not bound, the Status is False . When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True . The Message indicates which PVC owns the data volume. Message , in the Events section, provides further details including how long the PVC has been bound ( Age ) and by what resource ( From ), in this case datavolume-controller : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound Running - In this case, note that Type is Running and Status is False , indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False . However, note that Reason is Completed and the Message field indicates Import Complete . In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404 , listed in the Events section's first Warning . From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume: Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found Ready - If Type is Ready and Status is True , then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready | [
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 -- /usr/bin/gather",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 -- <environment_variable_1> <environment_variable_2> <script_name>",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 -- PROS=5 /usr/bin/gather 1",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 /usr/bin/gather --images",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 /usr/bin/gather --instancetypes",
"oc get events -n <namespace>",
"oc describe <resource> <resource_name>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6",
"oc get pods -n openshift-cnv",
"NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m",
"oc logs -n openshift-cnv <pod_name>",
"{\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373695Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373726Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"setting rate limiter to 5 QPS and 10 Burst\",\"pos\":\"virt-handler.go:462\",\"timestamp\":\"2022-04-17T08:58:37.373782Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]\",\"pos\":\"cpu_plugin.go:96\",\"timestamp\":\"2022-04-17T08:58:37.390221Z\"} {\"component\":\"virt-handler\",\"level\":\"warning\",\"msg\":\"host model mode is expected to contain only one model\",\"pos\":\"cpu_plugin.go:103\",\"timestamp\":\"2022-04-17T08:58:37.390263Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"node-labeller is running\",\"pos\":\"node_labeller.go:94\",\"timestamp\":\"2022-04-17T08:58:37.391011Z\"}",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #",
"oc apply vm <vm_name>",
"virtctl restart <vm_name> -n <namespace>",
"oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"storage\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"deployment\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"network\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"compute\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"schedule\"",
"{log_type=~\".+\",kubernetes_container_name=~\"<container>|<container>\"} 1 |json|kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"",
"{log_type=~\".+\", kubernetes_container_name=\"compute\"}|json |!= \"custom-ga-command\" 1",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |= \"error\" != \"timeout\"",
"oc describe dv <DataVolume>",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/virtualization/support |
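As a quick, hedged illustration of the log-viewing workflow in the section above — the namespace and VM name are placeholders, while the kubevirt.io/domain label and the compute and guest-console-log container names are taken from this section:
# List the virt-launcher pod that backs a given virtual machine
oc get pods -n <namespace> -l kubevirt.io/domain=<vm_name>
# Follow the virt-launcher compute container logs in real time
oc logs -f -n <namespace> -c compute <virt-launcher_pod_name>
# If the pod failed to start, inspect the logs of the last attempt
oc logs --previous -n <namespace> -c compute <virt-launcher_pod_name>
# Retrieve the guest serial console log (guest system log access must be enabled)
oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log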
Chapter 1. The basics of Ceph configuration | Chapter 1. The basics of Ceph configuration As a storage administrator, you need to have a basic understanding of how to view the Ceph configuration, and how to set the Ceph configuration options for the Red Hat Ceph Storage cluster. You can view and set the Ceph configuration options at runtime. Prerequisites Installation of the Red Hat Ceph Storage software. 1.1. Ceph configuration All Red Hat Ceph Storage clusters have a configuration, which defines: Cluster Identity Authentication settings Ceph daemons Network configuration Node names and addresses Paths to keyrings Paths to OSD log files Other runtime options A deployment tool, such as cephadm , will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a Red Hat Ceph Storage cluster without using a deployment tool. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 1.2. The Ceph configuration database The Ceph Monitor manages a configuration database of Ceph options that centralize configuration management by storing configuration options for the entire storage cluster. By centralizing the Ceph configuration in a database, this simplifies storage cluster administration. The priority order that Ceph uses to set options is: Compiled-in default values Ceph cluster configuration database Local ceph.conf file Runtime override, using the ceph daemon DAEMON-NAME config set or ceph tell DAEMON-NAME injectargs commands There are still a few Ceph options that can be defined in the local Ceph configuration file, which is /etc/ceph/ceph.conf by default. However, ceph.conf has been deprecated for Red Hat Ceph Storage 8. cephadm uses a basic ceph.conf file that only contains a minimal set of options for connecting to Ceph Monitors, authenticating, and fetching configuration information. In most cases, cephadm uses only the mon_host option. To avoid using ceph.conf only for the mon_host option, use DNS SRV records to perform operations with Monitors. Important Red Hat recommends that you use the assimilate-conf administrative command to move valid options into the configuration database from the ceph.conf file. For more information about assimilate-conf , see Administrative Commands. Ceph allows you to make changes to the configuration of a daemon at runtime. This capability can be useful for increasing or decreasing the logging output, by enabling or disabling debug settings, and can even be used for runtime optimization. Note When the same option exists in the configuration database and the Ceph configuration file, the configuration database option has a lower priority than what is set in the Ceph configuration file. Sections and Masks Just as you can configure Ceph options globally, per daemon type, or by a specific daemon in the Ceph configuration file, you can also configure the Ceph options in the configuration database according to these sections: Section Description global Affects all daemons and clients. mon Affects all Ceph Monitors. mgr Affects all Ceph Managers. osd Affects all Ceph OSDs. mds Affects all Ceph Metadata Servers. client Affects all Ceph Clients, including mounted file systems, block devices, and RADOS Gateways. Ceph configuration options can have a mask associated with them. These masks can further restrict which daemons or clients the options apply to. 
Masks have two forms: type:location The type is a CRUSH property, for example, rack or host . The location is a value for the property type. For example, host:foo limits the option only to daemons or clients running on the foo host. Example class:device-class The device-class is the name of the CRUSH device class, such as hdd or ssd . For example, class:ssd limits the option only to Ceph OSDs backed by solid state drives (SSD). This mask has no effect on non-OSD daemons of clients. Example Administrative Commands The Ceph configuration database can be administered with the subcommand ceph config ACTION . These are the actions you can do: ls Lists the available configuration options. dump Dumps the entire configuration database of options for the storage cluster. get WHO Dumps the configuration for a specific daemon or client. For example, WHO can be a daemon, like mds.a . set WHO OPTION VALUE Sets a configuration option in the Ceph configuration database, where WHO is the target daemon, OPTION is the option to set, and VALUE is the desired value. show WHO Shows the reported running configuration for a running daemon. These options might be different from those stored by the Ceph Monitors if there is a local configuration file in use or options have been overridden on the command line or at run time. Also, the source of the option values is reported as part of the output. assimilate-conf -i INPUT_FILE -o OUTPUT_FILE Assimilate a configuration file from the INPUT_FILE and move any valid options into the Ceph Monitors' configuration database. Any options that are unrecognized, invalid, or cannot be controlled by the Ceph Monitor return in an abbreviated configuration file stored in the OUTPUT_FILE . This command can be useful for transitioning from legacy configuration files to a centralized configuration database. Note that when you assimilate a configuration and the Monitors or other daemons have different configuration values set for the same set of options, the end result depends on the order in which the files are assimilated. help OPTION -f json-pretty Displays help for a particular OPTION using a JSON-formatted output. Additional Resources For more information about the command, see Setting a specific configuration at runtime . 1.3. Using the Ceph metavariables Metavariables simplify Ceph storage cluster configuration dramatically. When a metavariable is set in a configuration value, Ceph expands the metavariable into a concrete value. Metavariables are very powerful when used within the [global] , [osd] , [mon] , or [client] sections of the Ceph configuration file. However, you can also use them with the administration socket. Ceph metavariables are similar to Bash shell expansion. Ceph supports the following metavariables: USDcluster Description Expands to the Ceph storage cluster name. Useful when running multiple Ceph storage clusters on the same hardware. Example /etc/ceph/USDcluster.keyring Default ceph USDtype Description Expands to one of osd or mon , depending on the type of the instant daemon. Example /var/lib/ceph/USDtype USDid Description Expands to the daemon identifier. For osd.0 , this would be 0 . Example /var/lib/ceph/USDtype/USDcluster-USDid USDhost Description Expands to the host name of the instant daemon. USDname Description Expands to USDtype.USDid . Example /var/run/ceph/USDcluster-USDname.asok 1.4. Viewing the Ceph configuration at runtime The Ceph configuration files can be viewed at boot time and run time. Prerequisites Root-level access to the Ceph node. 
Access to admin keyring. Procedure To view a runtime configuration, log in to a Ceph node running the daemon and execute: Syntax To see the configuration for osd.0 , you can log into the node containing osd.0 and execute this command: Example For additional options, specify a daemon and help . Example 1.5. Viewing a specific configuration at runtime Configuration settings for Red Hat Ceph Storage can be viewed at runtime from the Ceph Monitor node. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Log into a Ceph node and execute: Syntax Example 1.6. Setting a specific configuration at runtime To set a specific Ceph configuration at runtime, use the ceph config set command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor or OSD nodes. Procedure Set the configuration on all Monitor or OSD daemons : Syntax Example Validate that the option and value are set: Example To remove the configuration option from all daemons: Syntax Example To set the configuration for a specific daemon: Syntax Example To validate that the configuration is set for the specified daemon: Example To remove the configuration for a specific daemon: Syntax Example Note If you use a client that does not support reading options from the configuration database, or if you still need to use ceph.conf to change your cluster configuration for other reasons, run the following command: You must maintain and distribute the ceph.conf file across the storage cluster. 1.7. OSD Memory Target BlueStore keeps OSD heap memory usage under a designated target size with the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. Use this option when TCMalloc is configured as the memory allocator, and when the bluestore_cache_autotune option in BlueStore is set to true . Ceph OSD memory caching is more important when the block device is slow; for example, traditional hard drives, because the benefit of a cache hit is much higher than it would be with a solid state drive. However, this must be weighed into a decision to colocate OSDs with other services, such as in a hyper-converged infrastructure (HCI) or other applications. 1.7.1. Setting the OSD memory target Use the osd_memory_target option to set the maximum memory threshold for all OSDs in the storage cluster, or for specific OSDs. An OSD with an osd_memory_target option set to 16 GB might use up to 16 GB of memory. Note Configuration options for individual OSDs take precedence over the settings for all OSDs. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all hosts in the storage cluster. Procedure To set osd_memory_target for all OSDs in the storage cluster: Syntax VALUE is the number of GBytes of memory to be allocated to each OSD in the storage cluster. To set osd_memory_target for a specific OSD in the storage cluster: Syntax .id is the ID of the OSD and VALUE is the number of GB of memory to be allocated to the specified OSD. For example, to configure the OSD with ID 8 to use up to 16 GBytes of memory: Example To set an individual OSD to use one maximum amount of memory and configure the rest of the OSDs to use another amount, specify the individual OSD first: Example Additional resources To configure Red Hat Ceph Storage to autotune OSD memory usage, see Automatically tuning OSD memory in the Operations Guide . 1.8. 
Automatically tuning OSD memory The OSD daemons adjust the memory consumption based on the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. If Red Hat Ceph Storage is deployed on dedicated nodes that do not share memory with other services, cephadm automatically adjusts the per-OSD consumption based on the total amount of RAM and the number of deployed OSDs. Important By default, the osd_memory_target_autotune parameter is set to true in the Red Hat Ceph Storage cluster. Syntax Cephadm starts with a fraction mgr/cephadm/autotune_memory_target_ratio , which defaults to 0.7 of the total RAM in the system, subtract off any memory consumed by non-autotuned daemons such as non-OSDS and for OSDs for which osd_memory_target_autotune is false, and then divide by the remaining OSDs. The osd_memory_target parameter is calculated as follows: Syntax SPACE_ALLOCATED_FOR_OTHER_DAEMONS may optionally include the following daemon space allocations: Alertmanager: 1 GB Grafana: 1 GB Ceph Manager: 4 GB Ceph Monitor: 2 GB Node-exporter: 1 GB Prometheus: 1 GB For example, if a node has 24 OSDs and has 251 GB RAM space, then osd_memory_target is 7860684936 . The final targets are reflected in the configuration database with options. You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column. Note The default setting of osd_memory_target_autotune true is unsuitable for hyperconverged infrastructures where compute and Ceph storage services are colocated. In a hyperconverged infrastructure, the autotune_memory_target_ratio can be set to 0.2 to reduce the memory consumption of Ceph. Example You can manually set a specific memory target for an OSD in the storage cluster. Example You can manually set a specific memory target for an OSD host in the storage cluster. Syntax Example Note Enabling osd_memory_target_autotune overwrites existing manual OSD memory target settings. To prevent daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled, set the _no_autotune_memory label on the host. Syntax You can exclude an OSD from memory autotuning by disabling the autotune option and setting a specific memory target. Example 1.9. MDS Memory Cache Limit MDS servers keep their metadata in a separate storage pool, named cephfs_metadata , and are the users of Ceph OSDs. For Ceph File Systems, MDS servers have to support an entire Red Hat Ceph Storage cluster, not just a single storage device within the storage cluster, so their memory requirements can be significant, particularly if the workload consists of small-to-medium-size files, where the ratio of metadata to data is much higher. Example: Set the mds_cache_memory_limit to 2000000000 bytes Note For a large Red Hat Ceph Storage cluster with a metadata-intensive workload, do not put an MDS server on the same node as other memory-intensive services, doing so gives you the option to allocate more memory to MDS, for example, sizes greater than 100 GB. Additional Resources See Metadata Server cache size limits in Red Hat Ceph Storage File System Guide . See the general Ceph configuration options in Configuration options for specific option descriptions and usage. | [
"ceph config set osd/host:magna045 debug_osd 20",
"ceph config set osd/class:hdd osd_max_backfills 8",
"ceph daemon DAEMON_TYPE . ID config show",
"ceph daemon osd.0 config show",
"ceph daemon osd.0 help",
"ceph daemon DAEMON_TYPE . ID config get PARAMETER",
"ceph daemon osd.0 config get public_addr",
"ceph config set DAEMON CONFIG-OPTION VALUE",
"ceph config set osd debug_osd 10",
"ceph config dump osd advanced debug_osd 10/10",
"ceph config rm DAEMON CONFIG-OPTION VALUE",
"ceph config rm osd debug_osd",
"ceph config set DAEMON . DAEMON-NUMBER CONFIG-OPTION VALUE",
"ceph config set osd.0 debug_osd 10",
"ceph config dump osd.0 advanced debug_osd 10/10",
"ceph config rm DAEMON . DAEMON-NUMBER CONFIG-OPTION",
"ceph config rm osd.0 debug_osd",
"ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf false",
"ceph config set osd osd_memory_target VALUE",
"ceph config set osd.id osd_memory_target VALUE",
"ceph config set osd.8 osd_memory_target 16G",
"ceph config set osd osd_memory_target 16G ceph config set osd.8 osd_memory_target 8G",
"ceph config set osd osd_memory_target_autotune true",
"osd_memory_target = TOTAL_RAM_OF_THE_OSD * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS )",
"ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2",
"ceph config set osd.123 osd_memory_target 7860684936",
"ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES",
"ceph config set osd/host:host01 osd_memory_target 1000000000",
"ceph orch host label add HOSTNAME _no_autotune_memory",
"ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G",
"ceph_conf_overrides: mds: mds_cache_memory_limit=2000000000"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/the-basics-of-ceph-configuration |
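To tie the preceding sections together, the following is a minimal sketch of a centralized-configuration workflow; the file paths and the osd.8 daemon ID are placeholder examples, not values from a real cluster:
# Move valid options from a legacy ceph.conf into the configuration database
ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /root/leftover.conf
# Set osd_memory_target only for OSDs backed by HDDs by using a device-class mask
ceph config set osd/class:hdd osd_memory_target 6G
# Check the value stored in the configuration database for a single OSD
ceph config get osd.8 osd_memory_target
# Compare it with the configuration that the running daemon reports
ceph config show osd.8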
Chapter 1. Introduction to scaling storage | Chapter 1. Introduction to scaling storage Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale by adding disks in multiples of three, or by adding any number of disks, depending on the deployment type. For internal (dynamic provisioning) deployment mode, you can increase the capacity by adding 3 disks at a time. For internal-attached (Local Storage Operator based) mode, you can deploy with less than 3 failure domains. With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployment with 3 failure domains, you can scale up by adding disks in multiples of 3. For scaling your storage in external mode, see Red Hat Ceph Storage documentation . Note You can use a maximum of nine storage devices per node. A higher number of storage devices leads to a longer recovery time when a node is lost. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices. While scaling, you must ensure that there are enough CPU and memory resources to meet the scaling requirements. Supported storage classes by default gp2-csi on AWS thin on VMware ovirt-csi-sc on Red Hat Virtualization managed_premium on Microsoft Azure 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or IBM(R) LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure Red Hat Virtualization VMware Bare metal | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/scaling_storage/scaling-overview_rhodf
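The chapter above is an overview only; the exact scaling procedure depends on your deployment type. As a minimal sketch, assuming the default ocs-storagecluster name in the openshift-storage namespace and an internal dynamic-provisioning deployment, you can inspect the current device-set count and add capacity by incrementing it (each increment typically adds three OSDs); verify the value and the procedure against the scaling guide for your platform before applying it:
# Show the current number of storage device sets (default resource names assumed)
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.storageDeviceSets[0].count}'
# Increase the count by one to add capacity; replace 2 with your current count plus one
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/count", "value": 2}]'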
Chapter 5. Container images with Go Toolset | Chapter 5. Container images with Go Toolset You can build your own Go Toolset containers from either Red Hat Enterprise Linux container images or Red Hat Universal Base Images (UBI). 5.1. Red Hat Enterprise Linux Go Toolset container images contents The Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 container images of Go Toolset contain the following packages: Component Version Package Go 1.22 RHEL 8 - go-toolset-1.22 RHEL 9 - go-toolset-1.22 5.2. Accessing Red Hat Enterprise Linux container images Pull the container image from the Red Hat registry before running your container and performing actions. Procedure To pull the required image, run: For an image based on Red Hat Enterprise Linux 8: For an image based on Red Hat Enterprise Linux 9: 5.3. Accessing the UBI Go Toolset container image on RHEL 8 On RHEL 8, install the UBI Go Toolset container image to access Go Toolset. Alternatively, you can install Go Toolset to the RHEL 8 base UBI container image. For further information, see Accessing Go Toolset from the base UBI container image on RHEL 8 . Procedure To pull the UBI Go Toolset container image from the Red Hat registry, run: On Red Hat Enterprise Linux 8 On Red Hat Enterprise Linux 9 5.4. Accessing Go Toolset from the base UBI container image on RHEL 8 On RHEL 8, Go Toolset packages are part of the Red Hat Universal Base Images (UBIs) repositories, which means you can install Go Toolset as an addition to the base UBI container image. To keep the container image size small, install only individual packages instead of the entire Go Toolset. Alternatively, you can install the UBI Go Toolset container image to access Go Toolset. For further information, see Accessing the UBI Go Toolset container image on RHEL 8 . Prerequisites An existing Containerfile. For information on creating Containerfiles, see the Dockerfile reference page. Procedure To create a container image containing Go Toolset, add the following lines to your Containerfile: To create a container image containing an individual package only, add the following lines to your Containerfile: Replace < package-name > with the name of the package you want to install. 5.5. Additional resources Go Toolset Container Images in the Red Hat Container Registry . For more information on Red Hat UBI images, see Working with Container Images . For more information on Red Hat UBI repositories, see Universal Base Images (UBI): Images, repositories, packages, and source code . | [
"podman pull registry.redhat.io/rhel8/go-toolset",
"podman pull registry.redhat.io/rhel9/go-toolset",
"podman pull registry.access.redhat.com/ubi8/go-toolset",
"podman pull registry.access.redhat.com/ubi9/go-toolset",
"FROM registry.access.redhat.com/ubi8/ubi: latest RUN yum module install -y go-toolset",
"RUN yum install -y < package-name >"
] | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.22_toolset/assembly_container-images-with-go-toolset_using-go-toolset |
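As a small end-to-end sketch of using the image as a build environment — the Containerfile contents, the --chown ownership, and the relative output path are illustrative assumptions rather than requirements of the image:
# Verify the Go toolchain version shipped in the image
podman run --rm registry.access.redhat.com/ubi8/go-toolset go version
# Build a local Go module on top of the Go Toolset image
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/go-toolset
COPY --chown=1001:0 . .
RUN go build -v -o app .
CMD ["./app"]
EOF
podman build -t my-go-app .
podman run --rm my-go-app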
Chapter 3. Disk partitions | Chapter 3. Disk partitions To divide a disk into one or more logical areas, use the disk partitioning utility. It enables separate management of each partition. 3.1. Overview of partitions The hard disk stores information about the location and size of each disk partition in the partition table. Using information from the partition table, the operating system treats each partition as a logical disk. Some of the advantages of disk partitioning include: Reduce the likelihood of administrative oversights of Physical Volumes Ensure sufficient backup Provide efficient disk management Additional resources What are the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM in between? (Red Hat Knowledgebase) 3.2. Comparison of partition table types To enable partitions on a device, format a block device with different types of partition tables. The following table compares the properties of different types of partition tables that you can create on a block device. Note This section does not cover the DASD partition table, which is specific to the IBM Z architecture. Table 3.1. Partition table types Partition table Maximum number of partitions Maximum partition size Master Boot Record (MBR) 4 primary, or 3 primary and 1 extended partition with 12 logical partitions 2 TiB if using 512 b sector drives 16 TiB if using 4 k sector drives GUID Partition Table (GPT) 128 8 ZiB if using 512 b sector drives 64 ZiB if using 4 k sector drives Additional resources Configuring a Linux instance on IBM Z What you should know about DASD 3.3. MBR disk partitions The partition table is stored at the very start of the disk, before any file system or user data. For a more clear example, the partition table is shown as being separate in the following diagrams. Figure 3.1. Disk with MBR partition table As the diagram shows, the partition table is divided into four sections of four unused primary partitions. A primary partition is a partition on a hard disk drive that contains only one logical drive (or section). Each logical drive holds the information necessary to define a single partition, meaning that the partition table can define no more than four primary partitions. Each partition table entry contains important characteristics of the partition: The points on the disk where the partition starts and ends The state of the partition, as only one partition can be flagged as active The type of partition The starting and ending points define the size and location of the partition on the disk. Some of the operating systems boot loaders use the active flag. That means that the operating system in the partition that is marked "active" is booted. The type is a number that identifies the anticipated usage of a partition. Some operating systems use the partition type to: Denote a specific file system type Flag the partition as being associated with a particular operating system Indicate that the partition contains a bootable operating system The following diagram shows an example of a drive with a single partition. In this example, the first partition is labeled as DOS partition type: Figure 3.2. Disk with a single partition Additional resources MBR partition types 3.4. Extended MBR partitions To create additional partitions, if needed, set the type to extended . An extended partition is similar to a disk drive. It has its own partition table, which points to one or more logical partitions, contained entirely within the extended partition. 
The following diagram shows a disk drive with two primary partitions, and one extended partition containing two logical partitions, along with some unpartitioned free space. Figure 3.3. Disk with both two primary and an extended MBR partitions You can have only up to four primary and extended partitions, but there is no fixed limit to the number of logical partitions. However, in Linux, a single disk drive allows access to a maximum of 15 logical partitions. 3.5. MBR partition types The table below shows a list of some of the most commonly used MBR partition types and hexadecimal numbers to represent them. Table 3.2. MBR partition types MBR partition type Value MBR partition type Value Empty 00 Novell Netware 386 65 DOS 12-bit FAT 01 PIC/IX 75 XENIX root 02 Old MINIX 80 XENIX usr 03 Linux/MINIX 81 DOS 16-bit <=32M 04 Linux swap 82 Extended 05 Linux native 83 DOS 16-bit >=32 06 Linux extended 85 OS/2 HPFS 07 Amoeba 93 AIX 08 Amoeba BBT 94 AIX bootable 09 BSD/386 a5 OS/2 Boot Manager 0a OpenBSD a6 Win95 FAT32 0b NEXTSTEP a7 Win95 FAT32 (LBA) 0c BSDI fs b7 Win95 FAT16 (LBA) 0e BSDI swap b8 Win95 Extended (LBA) 0f Syrinx c7 Venix 80286 40 CP/M db Novell 51 DOS access e1 PRep Boot 41 DOS R/O e3 GNU HURD 63 DOS secondary f2 Novell Netware 286 64 BBT ff 3.6. GUID partition table The GUID partition table (GPT) is a partitioning scheme based on the Globally Unique Identifier (GUID). GPT deals with the limitations of the Master Boot Record (MBR) partition table. The MBR partition table cannot address storage larger than 2 TiB, equal to approximately 2.2 TB. Instead, GPT supports hard disks with larger capacity. The maximum addressable disk size is 8 ZiB, when using 512b sector drives, and 64 ZiB, when using 4096b sector drives. In addition, by default, GPT supports creation of up to 128 primary partitions. Extend the maximum number of primary partitions by allocating more space to the partition table. Note A GPT has partition types based on GUIDs. Certain partitions require a specific GUID. For example, the system partition for Extensible Firmware Interface (EFI) boot loaders requires GUID C12A7328-F81F-11D2-BA4B-00A0C93EC93B . GPT disks use logical block addressing (LBA) and a partition layout as follows: For backward compatibility with MBR disks, the system reserves the first sector (LBA 0) of GPT for MBR data, and applies the name "protective MBR". Primary GPT The header begins on the second logical block (LBA 1) of the device. The header contains the disk GUID, the location of the primary partition table, the location of the secondary GPT header, and CRC32 checksums of itself, and the primary partition table. It also specifies the number of partition entries on the table. By default, the primary GPT includes 128 partition entries. Each partition has an entry size of 128 bytes, a partition type GUID and a unique partition GUID. Secondary GPT For recovery, it is useful as a backup table in case the primary partition table is corrupted. The last logical sector of the disk contains the secondary GPT header and recovers GPT information, in case the primary header is corrupted. It contains: The disk GUID The location of the secondary partition table and the primary GPT header CRC32 checksums of itself The secondary partition table The number of possible partition entries Figure 3.4. Disk with a GUID Partition Table Important For a successful installation of the boot loader onto a GPT disk, a BIOS boot partition must be present.
Reuse is possible only if the disk already contains a BIOS boot partition. This includes disks initialized by the Anaconda installation program. 3.7. Partition types There are multiple ways to manage partition types: The fdisk utility supports the full range of partition types by specifying hexadecimal codes. The systemd-gpt-auto-generator , a unit generator utility, uses the partition type to automatically identify and mount devices. The parted utility maps out the partition type with flags . The parted utility handles only certain partition types, for example LVM, swap or RAID. The parted utility supports setting the following flags: boot root swap hidden raid lvm lba legacy_boot irst esp palo On Red Hat Enterprise Linux 9 with parted 3.5, you can use the additional flags chromeos_kernel and bls_boot . The parted utility optionally accepts a file system type argument while creating a partition. For a list of the required conditions, see Creating a partition with parted . Use the value to: Set the partition flags on MBR. Set the partition UUID type on GPT. For example, the swap , fat , or hfs file system types set different GUIDs. The default value is the Linux Data GUID. The argument does not modify the file system on the partition. It only differentiates between the supported flags and GUIDs. The following file system types are supported: xfs ext2 ext3 ext4 fat16 fat32 hfs hfs+ linux-swap ntfs reiserfs 3.8. Partition naming scheme Red Hat Enterprise Linux uses a file-based naming scheme, with file names in the form of /dev/ xxyN . Device and partition names consist of the following structure: /dev/ Name of the directory that contains all device files. Hard disks contain partitions, thus the files representing all possible partitions are located in /dev . xx The first two letters of the partition name indicate the type of device that contains the partition. y This letter indicates the specific device containing the partition. For example, /dev/sda for the first hard disk and /dev/sdb for the second. You can use more letters in systems with more than 26 drives, for example, /dev/sdaa1 . N The final letter indicates the number to represent the partition. The first four (primary or extended) partitions are numbered 1 through 4 . Logical partitions start at 5 . For example, /dev/sda3 is the third primary or extended partition on the first hard disk, and /dev/sdb6 is the second logical partition on the second hard disk. Drive partition numbering applies only to MBR partition tables. Note that N does not always mean partition. Note Even if Red Hat Enterprise Linux can identify and refer to all types of disk partitions, it might not be able to read the file system and therefore access stored data on every partition type. However, in many cases, it is possible to successfully access data on a partition dedicated to another operating system. 3.9. Mount points and disk partitions In Red Hat Enterprise Linux, each partition forms a part of the storage, necessary to support a single set of files and directories. Mounting a partition makes the storage of that partition available, starting at the specified directory known as a mount point . For example, if partition /dev/sda5 is mounted on /usr/ , it means that all files and directories under /usr/ physically reside on /dev/sda5 . The file /usr/share/doc/FAQ/txt/Linux-FAQ resides on /dev/sda5 , while the file /etc/gdm/custom.conf does not. 
Continuing the example, it is also possible that one or more directories below /usr/ would be mount points for other partitions. For example, /usr/local/man/whatis resides on /dev/sda7 , rather than on /dev/sda5 , if /usr/local includes a mounted /dev/sda7 partition. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/disk-partitions_managing-storage-devices |
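To see the concepts from this chapter on a live system — the partition table type, the partition types and flags, the device names, and the mount points — the following read-only commands are a convenient starting point; /dev/sda is a placeholder device name:
# Show the partition table type (msdos or gpt) with partition numbers, sizes, and flags
parted /dev/sda print
# Show MBR partition type codes in hexadecimal
fdisk -l /dev/sda
# Map device names such as /dev/sda3 to sizes, file systems, and mount points
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sda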
9.2. The Graphical Installation Program User Interface | 9.2. The Graphical Installation Program User Interface If you have used a graphical user interface (GUI) before, you are already familiar with this process; use your mouse to navigate the screens, click buttons, or enter text fields. You can also navigate through the installation using the keyboard. The Tab key allows you to move around the screen, the Up and Down arrow keys to scroll through lists, + and - keys expand and collapse lists, while Space and Enter selects or removes from selection a highlighted item. You can also use the Alt + X key command combination as a way of clicking on buttons or making other screen selections, where X is replaced with any underlined letter appearing within that screen. Note If you are using an x86, AMD64, or Intel 64 system, and you do not wish to use the GUI installation program, the text mode installation program is also available. To start the text mode installation program, use the following command at the boot: prompt: Refer to Section 7.1.2, "The Boot Menu" for a description of the Red Hat Enterprise Linux boot menu and to Section 8.1, "The Text Mode Installation Program User Interface" for a brief overview of text mode installation instructions. It is highly recommended that installs be performed using the GUI installation program. The GUI installation program offers the full functionality of the Red Hat Enterprise Linux installation program, including LVM configuration which is not available during a text mode installation. Users who must use the text mode installation program can follow the GUI installation instructions and obtain all needed information. 9.2.1. Screenshots During Installation Anaconda allows you to take screenshots during the installation process. At any time during installation, press Shift + Print Screen and anaconda will save a screenshot to /root/anaconda-screenshots . If you are performing a Kickstart installation, use the autostep --autoscreenshot option to generate a screenshot of each step of the installation automatically. Refer to Section 32.3, "Creating the Kickstart File" for details of configuring a Kickstart file. | [
"linux text"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-guimode-interface-x86 |
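For reference, the autostep --autoscreenshot behavior mentioned above is set in the Kickstart file itself; the following is an illustrative fragment only, not a complete Kickstart file:
# ks.cfg fragment: capture a screenshot of each installation step automatically
# (screenshots are saved to /root/anaconda-screenshots)
autostep --autoscreenshot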
Migration Toolkit for Containers | Migration Toolkit for Containers OpenShift Container Platform 4.14 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/index |
Chapter 1. Integrating with image registries | Chapter 1. Integrating with image registries Red Hat Advanced Cluster Security for Kubernetes (RHACS) integrates with a variety of image registries so that you can understand your images and apply security policies for image usage. When you integrate with image registries, you can view important image details, such as image creation date and Dockerfile details (including image layers). After you integrate RHACS with your registry, you can scan images, view image components, and apply security policies to images before or after deployment. Note When you integrate with an image registry, RHACS does not scan all images in your registry. RHACS only scans the images when you: Use the images in deployments Use the roxctl CLI to check images Use a continuous integration (CI) system to enforce security policies You can integrate RHACS with major image registries, including: Amazon Elastic Container Registry (ECR) Docker Hub Google Container Registry (GCR) Google Artifact Registry IBM Cloud Container Registry (ICR) JFrog Artifactory Microsoft Azure Container Registry (ACR) Red Hat Quay Red Hat container registries Sonatype Nexus GitHub container registry (GHCR) Any other registry that uses the Docker Registry HTTP API 1.1. Automatic configuration Red Hat Advanced Cluster Security for Kubernetes includes default integrations with standard registries, such as Docker Hub and others. It can also automatically configure integrations based on artifacts found in the monitored clusters, such as image pull secrets. Usually, you do not need to configure registry integrations manually. Important If you use a Google Container Registry (GCR), Red Hat Advanced Cluster Security for Kubernetes does not create a registry integration automatically. If you use Red Hat Advanced Cluster Security Cloud Service, automatic configuration is unavailable, and you must manually create registry integrations. 1.2. Amazon ECR integrations For Amazon ECR integrations, Red Hat Advanced Cluster Security for Kubernetes automatically generates ECR registry integrations if the following conditions are met: The cloud provider for the cluster is AWS. The nodes in your cluster have an Instance Identity and Access Management (IAM) Role association and the Instance Metadata Service is available in the nodes. For example, when using Amazon Elastic Kubernetes Service (EKS) to manage your cluster, this role is known as the EKS Node IAM role. The Instance IAM role has IAM policies granting access to the ECR registries from which you are deploying. If the listed conditions are met, Red Hat Advanced Cluster Security for Kubernetes monitors deployments that pull from ECR registries and automatically generates ECR integrations for them. You can edit these integrations after they are automatically generated. 1.3. Manually configuring image registries If you are using GCR, you must manually create image registry integrations. 1.3.1. Manually configuring OpenShift Container Platform registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with OpenShift Container Platform built-in container image registry. Prerequisites You need a username and a password for authentication with the OpenShift Container Platform registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Generic Docker Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. 
Endpoint : The address of the registry. Username and Password . If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.2. Manually configuring Amazon Elastic Container Registry You can use Red Hat Advanced Cluster Security for Kubernetes to create and modify Amazon Elastic Container Registry (ECR) integrations manually. If you are deploying from Amazon ECR, integrations for the Amazon ECR registries are usually automatically generated. However, you might want to create integrations on your own to scan images outside deployments. You can also modify the parameters of an automatically-generated integration. For example, you can change the authentication method used by an automatically-generated Amazon ECR integration to use AssumeRole authentication or other authorization models. Important To erase changes you made to an automatically-generated ECR integration, delete the integration, and Red Hat Advanced Cluster Security for Kubernetes creates a new integration for you with the automatically-generated parameters when you deploy images from Amazon ECR. Prerequisites You must have an Amazon Identity and Access Management (IAM) access key ID and a secret access key. Alternatively, you can use a node-level IAM proxy such as kiam or kube2iam . The access key must have read access to ECR. See How do I create an AWS access key? for more information. If you are running Red Hat Advanced Cluster Security for Kubernetes in Amazon Elastic Kubernetes Service (EKS) and want to integrate with an ECR from a separate Amazon account, you must first set a repository policy statement in your ECR. Follow the instructions at Setting a repository policy statement and for Actions , choose the following scopes of the Amazon ECR API operations: ecr:BatchCheckLayerAvailability ecr:BatchGetImage ecr:DescribeImages ecr:GetDownloadUrlForLayer ecr:ListImages Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Amazon ECR . Click New integration , or click one of the automatically-generated integrations to open it, then click Edit . Enter or modify the details for the following fields: Update stored credentials : Clear this box if you are modifying an integration without updating the credentials such as access keys and passwords. Integration name : The name of the integration. Registry ID : The ID of the registry. Endpoint : The address of the registry. This value is required only if you are using a private virtual private cloud (VPC) endpoint for Amazon ECR. This field is not enabled when the AssumeRole option is selected. Region : The region for the registry; for example, us-west-1 . If you are using IAM, select Use Container IAM role . Otherwise, clear the Use Container IAM role box and enter the Access key ID and Secret access key . If you are using AssumeRole authentication, select Use AssumeRole and enter the details for the following fields: AssumeRole ID : The ID of the role to assume. AssumeRole External ID (optional): If you are using an external ID with AssumeRole , you can enter it here. Select Create integration without testing to create the integration without testing the connection to the registry. 
Select Test to test that the integration with the selected registry is working. Select Save . 1.3.2.1. Using assumerole with Amazon ECR You can use AssumeRole to grant access to AWS resources without manually configuring each user's permissions. Instead, you can define a role with the desired permissions so that the user is granted access to assume that role. AssumeRole enables you to grant, revoke, or otherwise generally manage more fine-grained permissions. 1.3.2.1.1. Configuring AssumeRole with container IAM Before you can use AssumeRole with Red Hat Advanced Cluster Security for Kubernetes, you must first configure it. Procedure Enable the IAM OIDC provider for your EKS cluster: USD eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve Create an IAM role for your EKS cluster. Associate the newly created role with a service account: USD kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name> Restart Central to apply the changes. USD kubectl -n stackrox delete pod -l app=central Assign the role to a policy that allows the role to assume another role as required: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>" 1 } ] } 1 Replace <assumerole-readonly> with the role you want to assume. Update the trust relationship for the role you want to assume: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:role/<role-name>" 1 ] }, "Action": "sts:AssumeRole" } ] } 1 The <role-name> should match with the new role you have created earlier. 1.3.2.1.2. Configuring AssumeRole without container IAM To use AssumeRole without container IAM, you must use an access and a secret key to authenticate as an AWS user with programmatic access . Procedure Depending on whether the AssumeRole user is in the same account as the ECR registry or in a different account, you must either: Create a new role with the desired permissions if the user for which you want to assume role is in the same account as the ECR registry. Note When creating the role, you can choose any trusted entity as required. However, you must modify it after creation. Or, you must provide permissions to access the ECR registry and define its trust relationship if the user is in a different account than the ECR registry: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>" 1 } ] } 1 Replace <assumerole-readonly> with the role you want to assume. Configure the trust relationship of the role by including the user ARN under the Principal field: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:user/<role-name>" ] }, "Action": "sts:AssumeRole" } ] } 1.3.2.1.3. Configuring AssumeRole in RHACS After configuring AssumeRole in ECR, you can integrate Red Hat Advanced Cluster Security for Kubernetes with Amazon Elastic Container Registry (ECR) by using AssumeRole. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Amazon ECR . Click New Integration . Enter the details for the following fields: Integration Name : The name of the integration. Registry ID : The ID of the registry. 
Region : The region for the registry; for example, us-west-1 . If you are using IAM, select Use container IAM role . Otherwise, clear the Use custom IAM role box and enter the Access key ID and Secret access key . If you are using AssumeRole, select Use AssumeRole and enter the details for the following fields: AssumeRole ID : The ID of the role to assume. AssumeRole External ID (optional): If you are using an external ID with AssumeRole , you can enter it here. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.3. Manually configuring Google Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Container Registry (GCR). Prerequisites You need either a workload identity or a service account key for authentication. The associated service account must have access to the registry. See Configuring access control for information about granting users and other projects access to GCR. If you are using GCR Container Analysis , you must also grant the following roles to the service account: Container Analysis Notes Viewer Container Analysis Occurrences Viewer Storage Object Viewer Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Type : Select Registry . Registry Endpoint : The address of the registry. Project : The Google Cloud project name. Use workload identity : Check to authenticate using a workload identity. Service account key (JSON) : Your service account key for authentication. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.4. Manually configuring Google Artifact Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Artifact Registry. Prerequisites You need either a workload identity or a service account key for authentication. The associated service account must have the Artifact Registry Reader Identity and Access Management (IAM) role roles/artifactregistry.reader . Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Artifact Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Registry endpoint : The address of the registry. Project : The Google Cloud project name. Use workload identity : Check to authenticate using a workload identity. Service account key (JSON) : Your service account key for authentication. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.5. Manually configuring Microsoft Azure Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Microsoft Azure Container Registry. Prerequisites You have either an Azure managed or Azure workload identity. For more information about Azure managed identities, see What are managed identities for Azure resources? (Microsoft Azure documentation). For more information about Azure workload identities, see Workload identity federation (Microsoft Azure documentation). 
You have the Reader role for the identity over a scope that includes the container registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Microsoft Azure Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . Optional: Select the Use workload identity checkbox, if you want to authenticate by using an Azure managed or workload identity. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.6. Manually configuring JFrog Artifactory You can integrate Red Hat Advanced Cluster Security for Kubernetes with JFrog Artifactory. Prerequisites You must have a username and a password for authentication with JFrog Artifactory. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select JFrog Artifactory . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.7. Manually configuring Quay Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes (RHACS) with Quay Container Registry. You can integrate with Quay by using the following methods: Integrating with the Quay public repository (registry): This method does not require authentication. Integrating with a Quay private registry by using a robot account: This method requires that you create a robot account to use with Quay (recommended). See the Quay documentation for more information. Integrating with Quay to use the Quay scanner rather than the RHACS scanner: This method uses the API and requires an OAuth token for authentication. See "Integrating with Quay Container Registry to scan images" in the "Additional Resources" section. Prerequisites For authentication with a Quay private registry, you need the credentials associated with a robot account or an OAuth token (deprecated). Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Quay.io . Click New integration . Enter the Integration name. Enter the Endpoint , or the address of the registry. If you are integrating with the Quay public repository, under Type , select Registry , and then go to the step. If you are integrating with a Quay private registry, under Type , select Registry and enter information in the following fields: Robot username : If you are accessing the registry by using a Quay robot account, enter the user name in the format <namespace>+<accountname> . Robot password : If you are accessing the registry by using a Quay robot account, enter the password for the robot account user name. OAuth token : If you are accessing the registry by using an OAuth token (deprecated), enter it in this field. 
Optional: If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Optional: To create the integration without testing, select Create integration without testing . Select Save . Note If you are editing a Quay integration but do not want to update your credentials, verify that Update stored credentials is not selected. Additional resources Integrating with Quay Container Registry to scan images 1.3.8. Manually configuring GitHub Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with GitHub Container Registry (GHCR). Prerequisites You need a GitHub account and personal access token with at least packages:read permissions. See Working with the Container registry for more information. Procedure In the RHACS portal, go to Platform Configuration Integrations . In the Image Integrations section, select GitHub Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username : The GitHub username. Leave blank for anonymous access. GitHub Token : The GitHub personal access token. Leave blank for anonymous access. Select Create integration without testing to create the integration without testing the connection to the registry. Enable this option to allow anonymous access. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.9. Manually configuring IBM Cloud Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with IBM Cloud Container Registry. Prerequisites You must have an API key for authentication with the IBM Cloud Container Registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select IBM Cloud Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. API key . Select Test to test that the integration with the selected registry is working. Select Save . 1.3.10. Manually configuring Red Hat Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Red Hat Container Registry. Prerequisites You must have a username and a password for authentication with the Red Hat Container Registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . | [
"eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve",
"kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name>",
"kubectl -n stackrox delete pod -l app=central",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role-name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:user/<role-name>\" ] }, \"Action\": \"sts:AssumeRole\" } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-image-registries |
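A quick way to sanity-check the AssumeRole chain described above, before saving the RHACS integration, is to assume the read-only role from a workstation and call one of the ECR actions granted by the repository policy. The following sketch is illustrative only: it assumes the AWS CLI is already configured with the credentials of the user or role trusted to call sts:AssumeRole, and <ecr-registry>, <assumerole-readonly>, <repository>, and the us-west-1 region are placeholders rather than values taken from this document.

    # Assume the read-only role and note the temporary credentials in the output
    aws sts assume-role \
        --role-arn arn:aws:iam::<ecr-registry>:role/<assumerole-readonly> \
        --role-session-name rhacs-ecr-check

    # Export the AccessKeyId, SecretAccessKey, and SessionToken values returned above
    export AWS_ACCESS_KEY_ID=<AccessKeyId>
    export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
    export AWS_SESSION_TOKEN=<SessionToken>

    # Call an action covered by the repository policy scopes (for example ecr:ListImages)
    aws ecr list-images --repository-name <repository> --region us-west-1

If the sts assume-role call is rejected, recheck the trust relationship shown earlier; if the role is assumed but list-images is denied, recheck the repository policy statement and its action scopes.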
23.4. Stacking I/O Parameters | 23.4. Stacking I/O Parameters All layers of the Linux I/O stack have been engineered to propagate the various I/O parameters up the stack. When a layer consumes an attribute or aggregates many devices, the layer must expose appropriate I/O parameters so that upper-layer devices or tools will have an accurate view of the storage as it transformed. Some practical examples are: Only one layer in the I/O stack should adjust for a non-zero alignment_offset ; once a layer adjusts accordingly, it will export a device with an alignment_offset of zero. A striped Device Mapper (DM) device created with LVM must export a minimum_io_size and optimal_io_size relative to the stripe count (number of disks) and user-provided chunk size. In Red Hat Enterprise Linux 6, Device Mapper and Software Raid (MD) device drivers can be used to arbitrarily combine devices with different I/O parameters. The kernel's block layer will attempt to reasonably combine the I/O parameters of the individual devices. The kernel will not prevent combining heterogeneous devices; however, be aware of the risks associated with doing so. For instance, a 512-byte device and a 4K device may be combined into a single logical DM device, which would have a logical_block_size of 4K. File systems layered on such a hybrid device assume that 4K will be written atomically, but in reality it will span 8 logical block addresses when issued to the 512-byte device. Using a 4K logical_block_size for the higher-level DM device increases potential for a partial write to the 512-byte device if there is a system crash. If combining the I/O parameters of multiple devices results in a conflict, the block layer may issue a warning that the device is susceptible to partial writes and/or is misaligned. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iolimitstacking |
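The parameters named in this section are exported through sysfs, so the values that a stacked device ended up with can be read directly and compared against the devices underneath it. The device names sda and dm-0 below are examples only; substitute the component devices and the DM device of interest.

    # I/O parameters exported by one of the underlying devices
    cat /sys/block/sda/queue/logical_block_size
    cat /sys/block/sda/queue/physical_block_size
    cat /sys/block/sda/queue/minimum_io_size
    cat /sys/block/sda/queue/optimal_io_size
    cat /sys/block/sda/alignment_offset

    # The same parameters on the stacked Device Mapper device, for comparison
    cat /sys/block/dm-0/queue/logical_block_size
    cat /sys/block/dm-0/queue/minimum_io_size
    cat /sys/block/dm-0/queue/optimal_io_size
    cat /sys/block/dm-0/alignment_offset

A logical_block_size on the DM device that is larger than that of one of its components is exactly the hybrid situation warned about above.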
Configuring SAP HANA Scale-Up Multitarget System Replication for disaster recovery | Configuring SAP HANA Scale-Up Multitarget System Replication for disaster recovery Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/index |
2.6. File System Backups | 2.6. File System Backups It is important to make regular backups of your GFS2 file system in case of emergency, regardless of the size of your file system. Many system administrators feel safe because they are protected by RAID, multipath, mirroring, snapshots, and other forms of redundancy, but there is no such thing as safe enough. It can be a problem to create a backup since the process of backing up a node or set of nodes usually involves reading the entire file system in sequence. If this is done from a single node, that node will retain all the information in cache until other nodes in the cluster start requesting locks. Running this type of backup program while the cluster is in operation will negatively impact performance. Dropping the caches once the backup is complete reduces the time required by other nodes to regain ownership of their cluster locks/caches. This is still not ideal, however, because the other nodes will have stopped caching the data that they were caching before the backup process began. You can drop caches using the following command after the backup is complete: It is faster if each node in the cluster backs up its own files so that the task is split between the nodes. You might be able to accomplish this with a script that uses the rsync command on node-specific directories. The best way to make a GFS2 backup is to create a hardware snapshot on the SAN, present the snapshot to another system, and back it up there. The backup system should mount the snapshot with -o lockproto=lock_nolock since it will not be in a cluster. | [
"echo -n 3 > /proc/sys/vm/drop_caches"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-backups-gfs2 |
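The per-node backup suggested above can be sketched as a small script that each cluster node runs against its own directory, followed by the cache drop shown in the command listing. The mount point /mnt/gfs2, the node-specific directory layout, and the backup target are assumptions for illustration, not values from this section.

    #!/bin/bash
    # Each node backs up only its own subdirectory of the GFS2 file system
    NODE=$(uname -n)
    SRC=/mnt/gfs2/${NODE}                      # node-specific directory (assumption)
    DEST=backup.example.com:/backups/${NODE}   # backup host and path (assumption)

    rsync -a --delete "${SRC}/" "${DEST}/"

    # Release cached data once the backup is complete, as recommended above
    sync
    echo -n 3 > /proc/sys/vm/drop_caches

    # Alternative from the text: mount a SAN snapshot on a non-cluster backup host
    # mount -o lockproto=lock_nolock /dev/<snapshot_device> /mnt/gfs2-snapshot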
Chapter 16. MapReduce | Chapter 16. MapReduce Important Map/Reduce has been deprecated in JBoss Data Grid 6.6.0, and is expected to be removed in subsequent versions. This feature will be replaced by Distributed Streams, which was shown to have better performance. The Red Hat JBoss Data Grid MapReduce model is an adaptation of Google 's MapReduce model. MapReduce is a programming model used to process and generate large data sets. It is typically used in distributed computing environments where nodes are clustered. In JBoss Data Grid, MapReduce allows transparent distributed processing of large amounts of data across the grid. It does this by performing computations locally where the data is stored whenever possible. MapReduce uses the two distinct computational phases of map and reduce to process information requests through the data grid. The process occurs as follows: The user initiates a task on a cache instance, which runs on a cluster node (the master node). The master node receives the task input, divides the task, and sends tasks for map phase execution on the grid. Each node executes a Mapper function on its input, and returns intermediate results back to the master node. If the useIntermediateSharedCache parameter is set to "true" , the map results are inserted in an intermediary cache, rather than being returned to the master node. If a Combiner has been specified with task.combinedWith(combiner) , the Combiner is called on the Mapper results and the combiner's results are returned to the master node or inserted in the intermediary cache. Note Combiners are not required but can only be used when the function is both commutative (changing the order of the operands does not change the results) and associative (the order in which the operations are performed does not matter as long as the sequence of the operands is not changed). Combiners are advantageous to use because they can improve the speeds of MapReduceTask executions. The master node collects all intermediate results from the map phase and merges all intermediate values associated with the same intermediate key. If the distributedReducePhase parameter is set to true , the merging of the intermediate values is done on each node, as the Mapper or Combiner results are inserted in the intermediary cache.The master node only receives the intermediate keys. The master node sends intermediate key/value pairs for reduction on the grid. If the distributedReducePhase parameter is set to "false" , the reduction phase is executed only on the master node. The final results of the reduction phase are returned. Optionally specify the target cache for the results using the instructions in Section 16.1.2, "Specify the Target Cache" . If the distributedReducePhase parameter is set to "true" , the master node running the task receives all results from the reduction phase and returns the final result to the MapReduce task initiator. If no target cache is specified and no collator is specified (using task.execute(Collator) ), the result map is returned to the master node. Report a bug 16.1. The MapReduce API In Red Hat JBoss Data Grid, each MapReduce task has five main components: Mapper Reducer Collator MapReduceTask Combiners The Mapper class implementation is a component of MapReduceTask , which is invoked once per input cache entry key/value pair. Map is a process of applying a given function to each element of a list, returning a list of results. 
Each node in the JBoss Data Grid executes the Mapper on a given cache entry key/value input pair. It then transforms this cache entry key/value pair into an intermediate key/value pair, which is emitted into the provided Collector instance. Note The MapReduceTask requires a Mapper and a Reducer but using a Collator or Combiner is optional. Example 16.1. Executing the Mapper At this stage, for each output key there may be multiple output values. The multiple values must be reduced to a single value, and this is the task of the Reducer . JBoss Data Grid's distributed execution environment creates one instance of Reducer per execution node. Example 16.2. Reducer The same Reducer interface is used for Combiners . A Combiner is similar to a Reducer , except that it must be able to work on partial results. The Combiner is executed on the results of the Mapper , on the same node, without considering the other nodes that might have generated values for the same intermediate key. Note Combiners are not required but can only be used when the function is both commutative (changing the order of the operands does not change the results) and associative (the order in which the operations are performed does not matter as long as the sequence of the operands is not changed). Combiners are advantageous to use because they can improve the speeds of MapReduceTask executions. As Combiners only see a part of the intermediate values, they cannot be used in all scenarios, however when used they can reduce network traffic and memory consumption in the intermediate cache significantly. The Collator coordinates results from Reducers that have been executed on JBoss Data Grid, and assembles a final result that is delivered to the initiator of the MapReduceTask . The Collator is applied to the final map key/value result of MapReduceTask . Example 16.3. Assembling the Result Report a bug 16.1.1. MapReduceTask In Red Hat JBoss Data Grid, MapReduceTask is a distributed task, which unifies the Mapper , Combiner , Reducer , and Collator components into a cohesive computation, which can be parallelized and executed across a large-scale cluster. These components can be specified with a fluent API. However,as most of them are serialized and executed on other nodes, using inner classes is not recommended. Example 16.4. Specifying MapReduceTask Components MapReduceTask requires a cache containing data that will be used as input for the task. The JBoss Data Grid execution environment will instantiate and migrate instances of provided Mappers and Reducers seamlessly across the nodes. By default, all available key/value pairs of a specified cache will be used as input data for the task. This can be modified by using the onKeys method as an input key filter. There are two MapReduceTask constructor parameters that determine how the intermediate values are processed: distributedReducePhase - When set to false , the default setting, the reducers are only executed on the master node. If set to true , the reducers are executed on every node in the cluster. useIntermediateSharedCache - Only important if distributedReducePhase is set to true . If true , which is the default setting, this task will share intermediate value cache with other executing MapReduceTasks on the grid. If set to false , this task will use its own dedicated cache for intermediate values. Note The default timeout for MapReduceTask is 0 (zero). That is, the task will wait indefinitely for its completion by default. Report a bug 16.1.2. 
Specify the Target Cache Red Hat JBoss Data Grid's MapReduce implementation allows users to specify a target cache to store the results of an executed task. The results are available after the execute method (which is synchronous) is complete. This variant of the execute method prevents the master JVM node from exceeding its allows maximum heap size. This is especially relevant if objects that are the results of the reduce phase have a large memory footprint or if multiple MapReduceTasks are concurrently executing on the master task node. Use the following method of MapReduceTask to specify a Cache object to store the results: Use the following method of MapReduceTask to specify a name for the target cache: Report a bug 16.1.3. Mapper and CDI The Mapper is invoked with appropriate input key/value pairs on an executing node, however Red Hat JBoss Data Grid also provides a CDI injection for an input cache. The CDI injection can be used where additional data from the input cache is required in order to complete map transformation. When the Mapper is executed on a JBoss Data Grid executing node, the JBoss Data Grid CDI module provides an appropriate cache reference, which is injected to the executing Mapper . To use the JBoss Data Grid CDI module with Mapper : Declare a cache field in Mapper . Annotate the cache field Mapper with @org.infinispan.cdi.Input . Annotate with mandatory @Inject annotation . Example 16.5. Using a CDI Injection Report a bug | [
"public interface Mapper<KIn, VIn, KOut, VOut> extends Serializable { /** * Invoked once for each input cache entry KIn,VOut pair. */ void map(KIn key, VIn value, Collector<KOut, VOut> collector);",
"public interface Reducer<KOut, VOut> extends Serializable { /** * Combines/reduces all intermediate values for a particular intermediate key to a single value. * <p> * */ VOut reduce(KOut reducedKey, Iterator<VOut> iter); }",
"public interface Collator<KOut, VOut, R> { /** * Collates all reduced results and returns R to invoker of distributed task. * * @return final result of distributed task computation */ R collate(Map<KOut, VOut> reducedResults);",
"new MapReduceTask(cache) .mappedWith(new MyMapper()) .combinedWith(new MyCombiner()) .reducedWith(new MyReducer()) .execute(new MyCollator());",
"public void execute(Cache<KOut, VOut> resultsCache) throws CacheException",
"public void execute(String resultsCache) throws CacheException",
"public class WordCountCacheInjecterMapper implements Mapper<String, String, String, Integer> { @Inject @Input private Cache<String, String> cache; @Override public void map(String key, String value, Collector<String, Integer> collector) { //use injected cache if needed StringTokenizer tokens = new StringTokenizer(value); while (tokens.hasMoreElements()) { for(String token : value.split(\"\\\\w\")) { collector.emit(token, 1); } } } }"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/chap-MapReduce |
6.2. Red Hat Virtualization 4.4 SP 1 Batch Update 2 (ovirt-4.5.2) | 6.2. Red Hat Virtualization 4.4 SP 1 Batch Update 2 (ovirt-4.5.2) 6.2.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1853924 Previously, when attempting to add a disk using ovirt-engine SDK script when the disk already exists, the operation fails, and an exception is thrown. With this release, the Add Disk Functionality checks for duplicate disks, and fails gracefully with a readable error message when the disk to be inserted already exists. BZ# 1955388 Previously, the Manager was able to start a virtual machine with a Resize and Pin NUMA policy on a host whose physical sockets did not correspond to the number of NUMA nodes. As a result, the wrong pinning was assigned to the policy. With this release, the Manager does not allow the virtual machine to be scheduled on such a host, making the pinning correct based on the algorithm. BZ# 2081676 Previously, when two mutually exclusive sos report options were used in the ovirt-log-collector, the log size limit was ignored. In this release, the limit on the size of the log per plugin works as expected. BZ# 2097558 Previously, running engine-setup did not always renew OVN certificates when they were close to expiration or expired. With this release, OVN certificates are always renewed by engine-setup when needed. BZ# 2097725 Previously, the Manager issued warnings about approaching certificate expiration before engine-setup could update the certificates. In this release the expiration warning and certificate update periods are aligned, and certificates are updated as soon as the warnings about their upcoming expiration occur. BZ# 2101481 The handling of core dumps during upgrade from Red Hat Virtualization versions to RHV 4.4 SP1 batch 1 has been fixed. BZ# 2104115 Previously, when importing a virtual machine with manual CPU pinning (pinned to a dedicated host), the manual pinning string was cleared, but the CPU pinning policy was not set to NONE. As a result, importing failed. In this release, the CPU pinning policy is set to NONE if the CPU pinning string is cleared, and importing succeeds. BZ# 2105781 The hosted-engine-ha binaries have been moved from /usr/share to /usr/libexec. As a result, the hosted-engine --clean-metadata command fails. With this release, you must use the new path for the command to succeed: /usr/libexec/ovirt-hosted-engine-ha/ovirt-ha-agent BZ# 2109923 Previously, it was not possible to import templates from the Administration Portal. With this release, importing templates from the Administration Portal is now possible. 6.2.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1793207 A new warning has been added to the vdsm-tool to protect users from using the unsupported user_friendly_names multipath configuration. The following is an example of the output: BZ# 2097536 In this release, the rhv-log-collector-analyzer now provides a detailed output for each problematic image, including disk names, associated virtual machine, the host running the virtual machine, snapshots, and the current Storage Pool Manager. This makes it easier to identify problematic virtual machines and collect SOS reports for related systems. The detailed view is now the default, and the compact option can be set by using the --compact switch in the command line. 
BZ# 2097560 Expiration of ovirt-provider-ovn certificate is now checked regularly along with other RHV certificates (engine CA, engine, or hypervisors) and if ovirt-provider-ovn is going to expire or has expired, the warning or alert is raised to the audit log. To renew the ovirt-provider-ovn certificate, run engine-setup. If your ovirt-provider-ovn certificate expires on a RHV version, you must upgrade to RHV 4.4 SP1 batch 2 or newer, and the ovirt-provider-ovn certificate will be renewed automatically as part of engine-setup. BZ# 2104939 With this release, OVA export or import works on hosts with a non-standard SSH port. BZ# 2107250 With this release, the process to check certificate validity is now compatible with both RHEL 8 and RHEL 7 based hypervisors. 6.2.3. Rebase: Bug Fixes and Enhancements These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization: BZ# 2092478 UnboundID LDAP SDK has been rebased on upstream version 6.0.4. See https://github.com/pingidentity/ldapsdk/releases for changes since version 4.0.14 6.2.4. Rebase: Bug Fixes Only These items are rebases of bug fixes included in this release of Red Hat Virtualization: BZ# 2104831 Rebase package(s) to version: 4.4.7. Highlights, important fixes, or notable enhancements: fixed BZ#2081676 6.2.5. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 2049286 With this release, only virtual machines pinned to hosts selected for upgrade are stopped during cluster upgrade. VMs pinned to hosts that are not selected for upgrade are not stopped. BZ# 2108985 RHV 4.4 SP1 and later is only supported on RHEL 8.6, so you cannot use RHEL 8.7 or later, and must stay with RHEL 8.6 EUS. BZ# 2113068 With this release, permissions for the /var/log/ovn directory are updated correctly during the upgrade of OVS/OVN 2.11 to OVS 2.15/OVN 2021. 6.2.6. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. BZ# 2111600 ovirt-engine-extension-aaa-jdbc and ovirt-engine-extension-aaa-ldap are deprecated in RHV 4.4 SP1. They remain in the RHV product, but for any new request, you should use integration with Red Hat Single Sign-On as described in https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/administration_guide/index#Configuring_Red_Hat_SSO | [
"vdsm-tool is-configured --module multipath WARNING: Invalid configuration: 'user_friendly_names' is enabled in multipath configuration: section1 { key1 value1 user_friendly_names yes key2 value2 } section2 { user_friendly_names yes } This configuration is not supported and may lead to storage domain corruption."
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_sp_1_batch_update_2_ovirt_4_5_2 |
Chapter 11. Troubleshooting client access to services in the other forest | Chapter 11. Troubleshooting client access to services in the other forest After configuring a trust between your Identity Management (IdM) and Active Directory (AD) environments, you might experience issues where a client in one domain is not able to access a service in the other domain. Use the following diagrams to troubleshoot the issue. 11.1. Flow of information when a host in the AD forest root domain requests services from an IdM server The following diagram explains the flow of information when an Active Directory (AD) client requests a service in the Identity Management (IdM) domain. If you have trouble accessing IdM services from AD clients, you can use this information to narrow your troubleshooting efforts and identify the source of the issue. The AD client contacts the AD Kerberos Distribution Center (KDC) to perform a TGS Request for the service in the IdM domain. The AD KDC recognizes that the service belongs to the trusted IdM domain. The AD KDC sends the client a cross-realm ticket-granting ticket (TGT), along with a referral to the trusted IdM KDC. The AD client uses the cross-realm TGT to request a ticket to the IdM KDC. The IdM KDC validates the Privileged Attribute Certificate (MS-PAC) that is transmitted with the cross-realm TGT. The IPA-KDB plugin might check the LDAP directory to see if foreign principals are allowed to get tickets for the requested service. The IPA-KDB plugin decodes the MS-PAC, verifies, and filters the data. It performs lookups in the LDAP server to check if it needs to augment the MS-PAC with additional information, such as local groups. The IPA-KDB plugin then encodes the PAC, signs it, attaches it to the service ticket, and sends it to the AD client. The AD client can now contact the IdM service using the service ticket issued by IdM KDC. 11.2. Flow of information when a host in an AD child domain requests services from an IdM server The following diagram explains the flow of information when an Active Directory (AD) host in a child domain requests a service in the Identity Management (IdM) domain. In this scenario, the AD client contacts the Kerberos Distribution Center (KDC) in the child domain, then contacts the KDC in the AD forest root, and finally contacts the IdM KDC to request access to the IdM service. If you have trouble accessing IdM services from AD clients, and your AD client belongs to a domain that is a child domain of an AD forest root, you can use this information to narrow your troubleshooting efforts and identify the source of the issue. The AD client contacts the AD Kerberos Distribution Center (KDC) in its own domain to perform a TGS Request for the service in the IdM domain. The AD KDC in child.ad.example.com , the child domain, recognizes that the service belongs to the trusted IdM domain. The AD KDC in the child domain sends the client a referral ticket for the AD forest root domain ad.example.com . The AD client contacts the KDC in the AD forest root domain for the service in the IdM domain. The KDC in the forest root domain recognizes that the service belongs to the trusted IdM domain. The AD KDC sends the client a cross-realm ticket-granting ticket (TGT), along with a referral to the trusted IdM KDC. The AD client uses the cross-realm TGT to request a ticket to the IdM KDC. The IdM KDC validates the Privileged Attribute Certificate (MS-PAC) that is transmitted with the cross-realm TGT. 
The IPA-KDB plugin might check the LDAP directory to see if foreign principals are allowed to get tickets for the requested service. The IPA-KDB plugin decodes the MS-PAC, verifies, and filters the data. It performs lookups in the LDAP server to check if it needs to augment the MS-PAC with additional information, such as local groups. The IPA-KDB plugin then encodes the PAC, signs it, attaches it to the service ticket, and sends it to the AD client. The AD client can now contact the IdM service using the service ticket issued by IdM KDC. 11.3. Flow of information when an IdM client requests services from an AD server The following diagram explains the flow of information when an Identity Management (IdM) client requests a service in the Active Directory (AD) domain when you have configured a two-way trust between IdM and AD. If you have trouble accessing AD services from IdM clients, you can use this information to narrow your troubleshooting efforts and identify the source of the issue. Note By default, IdM establishes a one-way trust to AD, which means it is not possible to issue cross-realm ticket-granting ticket (TGT) for resources in an AD forest. To be able to request tickets to services from trusted AD domains, configure a two-way trust. The IdM client requests a ticket-granting ticket (TGT) from the IdM Kerberos Distribution Center (KDC) for the AD service it wants to contact. The IdM KDC recognizes that the service belongs to the AD realm, verifies that the realm is known and trusted, and that the client is allowed to request services from that realm. Using information from the IdM Directory Server about the user principal, the IdM KDC creates a cross-realm TGT with a Privileged Attribute Certificate (MS-PAC) record about the user principal. The IdM KDC sends back a cross-realm TGT to the IdM client. The IdM client contacts the AD KDC to request a ticket for the AD service, presenting the cross-realm TGT that contains the MS-PAC provided by the IdM KDC. The AD server validates and filters the PAC, and returns a ticket for the AD service. The IPA client can now contact the AD service. Additional resources One-way trusts and two-way trusts | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/assembly_troubleshooting-client-access-to-services-in-the-other-forest_installing-trust-between-idm-and-ad |
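When one of these flows breaks, reproducing it by hand from the client usually shows which hop fails. The sketch below uses standard MIT Kerberos utilities; the realm names, user, and host names are placeholders, and the trace output location is just an example.

    # Obtain an initial TGT as the AD user (forest root or child domain realm)
    kinit aduser@AD.EXAMPLE.COM

    # Request a ticket for a service in the IdM realm and trace the referrals
    KRB5_TRACE=/dev/stderr kvno host/ipaclient.idm.example.com@IDM.EXAMPLE.COM

    # The cache should now hold both the cross-realm TGT
    # (krbtgt/IDM.EXAMPLE.COM@AD.EXAMPLE.COM) and the service ticket
    klist

If the cross-realm TGT never appears, the problem is on the AD side of the trust; if it appears but the service ticket request fails, the IdM KDC or the PAC checks described above are the more likely cause.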
Chapter 42. Hardware Enablement | Chapter 42. Hardware Enablement Trusted Computing Group TPM 2.0 System API library and management utilities available Two new packages have been added to Red Hat Enterprise Linux to support the Trusted Computing Group's Trusted Platform Module (TPM) 2.0 hardware as a Technology Preview: The tpm2-tss package adds the Intel implementation of the TPM 2.0 System API library. This library enables programs to interact with TPM 2.0 devices. The tpm2-tools package adds a set of utilities for management and utilization of TPM 2.0 devices from user space. (BZ# 1275027 , BZ#1275029) New package: tss2 The tss2 package adds IBM implementation of a Trusted Computing Group Software Stack (TSS) 2.0 as a Technology Preview. This package allows users to interact with TPM 2.0 devices. (BZ#1384452) LSI Syncro CS HA-DAS adapters Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 and later are encouraged to provide feedback to Red Hat and LSI. (BZ#1062759) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_hardware_enablement |
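A minimal way to try the Technology Preview packages mentioned above is to confirm that the kernel exposes a TPM device node and that the packages install cleanly. Utility names inside tpm2-tools vary between versions, so only package-level checks are shown; the device and driver names come from the text above.

    # TPM 2.0 hardware exposed by the kernel (absent on machines without a TPM)
    ls -l /dev/tpm0

    # Install the Technology Preview packages
    yum install tpm2-tss tpm2-tools

    # See which utilities this tpm2-tools build ships
    rpm -ql tpm2-tools | grep /usr/bin/

    # For the Syncro CS note, check the loaded megaraid_sas driver version
    modinfo megaraid_sas | grep -i ^version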
Chapter 7. Adding Storage for Red Hat Virtualization | Chapter 7. Adding Storage for Red Hat Virtualization Add storage as data domains in the new environment. A Red Hat Virtualization environment must have at least one data domain, but adding more is recommended. Add the storage you prepared earlier: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Important If you are using iSCSI storage, new data domains must not use the same iSCSI target as the self-hosted engine storage domain. Warning Creating additional data domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine. 7.1. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 7.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. 
When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally for the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Important If you use the REST API method discoveriscsi to discover the iscsi targets, you can use an FQDN or an IP address, but you must use the iscsi details from the discovered targets results to log in using the REST API method iscsilogin . See discoveriscsi in the REST API Guide for more information. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Important When using the REST API iscsilogin method to log in, you must use the iscsi details from the discovered targets results in the discoveriscsi method. See iscsilogin in the REST API Guide for more information. Click the + button to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 7.3. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . 
Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 7.4. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Adding_Storage_Domains_to_RHV_SHE_cli_deploy |
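Because the Manager reaches the storage only through the host selected in the New Domain dialog, it can save time to confirm from that host that the export is reachable and writable before filling in the dialog. The server name, export path, and mount point below are placeholders; the expectation that the export is writable by the vdsm user (UID/GID 36) comes from the general RHV storage preparation guidance, not from this section.

    # Run on the host that will be selected in the New Domain window
    showmount -e nfs.example.com

    # Manually mount the export path that will be entered in the dialog
    mkdir -p /tmp/nfs-check
    mount -t nfs nfs.example.com:/data /tmp/nfs-check

    # Verify that the vdsm user can write to the export
    su -s /bin/sh vdsm -c "touch /tmp/nfs-check/.rhv-write-test && rm /tmp/nfs-check/.rhv-write-test"

    umount /tmp/nfs-check

If the write test fails, fix the ownership and export options on the NFS server before adding the domain; otherwise the New Domain operation is likely to fail.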
Chapter 8. Removing failed or unwanted Ceph Object Storage devices | Chapter 8. Removing failed or unwanted Ceph Object Storage devices The failed or unwanted Ceph OSDs (Object Storage Devices) affects the performance of the storage infrastructure. Hence, to improve the reliability and resilience of the storage cluster, you must remove the failed or unwanted Ceph OSDs. If you have any failed or unwanted Ceph OSDs to remove: Verify the Ceph health status. For more information see: Verifying Ceph cluster is healthy . Based on the provisioning of the OSDs, remove failed or unwanted Ceph OSDs. See: Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation . Removing failed or unwanted Ceph OSDs provisioned using local storage devices . If you are using local disks, you can reuse these disks after removing the old OSDs. 8.1. Verifying Ceph cluster is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. 8.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation Follow the steps in the procedure to remove the failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation. Important Scaling down of cluster is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Scale down the OSD deployment. Get the osd-prepare pod for the Ceph OSD to be removed. Delete the osd-prepare pod. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete the OSD deployment. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 8.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices You can remove failed or unwanted Ceph provisioned using local storage devices by following the steps in the procedure. Important Scaling down of cluster is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Forcibly, mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure. 
Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete persistent volume claim (PVC) resources associated with the failed OSD. Get the PVC associated with the failed OSD. Get the persistent volume (PV) associated with the PVC. Get the failed device name. Get the prepare-pod associated with the failed OSD. Delete the osd-prepare pod before removing the associated PVC. Delete the PVC associated with the failed OSD. Remove failed device entry from the LocalVolume custom resource (CR). Log in to node with the failed device. Record the /dev/disk/by-id/<id> for the failed device name. Optional: In case, Local Storage Operator is used for provisioning OSD, login to the machine with {osd-id} and remove the device symlink. Get the OSD symlink for the failed device name. Remove the symlink. Delete the PV associated to the OSD. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 8.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, run the OSD removal job with FORCE_OSD_REMOVAL option to move the OSD to a destroyed state. Note You must use the FORCE_OSD_REMOVAL option only if all the PGs are in active state. If not, PGs must either complete the back filling or further investigated to ensure they are active. | [
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc",
"oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>",
"failed_osd_id=<osd-id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc delete deployment rook-ceph-osd-<osd-id>",
"oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"failed_osd_id=<osd_id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc",
"oc get -n openshift-storage pvc <pvc-name>",
"oc get pv <pv-name-from-above-command> -oyaml | grep path",
"oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted",
"oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>",
"oc delete -n openshift-storage pvc <pvc-name-from-step-a>",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock/",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock",
"rm /mnt/local-storage/localblock/<failed-device-name>",
"oc delete pv <pv-name>",
"#oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/troubleshooting_openshift_data_foundation/removing-failed-or-unwanted-ceph-object-storage-devices_rhodf |
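After the removal job reports Completed, it can be useful to confirm that Ceph itself no longer lists the removed OSD. The commands below assume the rook-ceph toolbox pod (label app=rook-ceph-tools) has been enabled in the openshift-storage namespace, which is an optional step described in the ODF troubleshooting documentation and not part of the procedure above.

    # Same completion check as above, without needing the exact pod suffix
    oc get pods -n openshift-storage | grep ocs-osd-removal

    # Query the Ceph cluster through the toolbox pod, if it is deployed
    TOOLS_POD=$(oc get pod -n openshift-storage -l app=rook-ceph-tools -o name)
    oc rsh -n openshift-storage ${TOOLS_POD} ceph status
    oc rsh -n openshift-storage ${TOOLS_POD} ceph osd tree

The removed OSD ID should no longer appear in the ceph osd tree output, and ceph status should show the placement groups recovering or active+clean once backfill finishes.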
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 1-1.37 Thu Jan 21 2016 Lenka Spackova Added information about the removed systemtap-grapher package to the Deprecated Functionality Chapter. Revision 1-1.36 Wed Jun 04 2014 Miroslav Svoboda Republished to include the latest description changes in the RHSA-2014:0634 kernel advisory. Revision 1-1.33 Wed Mar 12 2014 Miroslav Svoboda Republished to include the latest description changes in the RHSA-2014:0284 kernel advisory. Revision 1-1.32 Fri Jan 24 2014 Milan Navratil Republished to include Section 7.103.14, "RHBA-2013:0093 - kernel bug fix update" . Revision 1-1.30 Tue Dec 02 2013 Miroslav Svoboda Republished to include the latest description changes in the RHBA-2013-1770 kernel advisory. Revision 1-1.29 Fri Nov 29 2013 Miroslav Svoboda Republished to include Section 7.103.4, " RHBA-2013:1770 - kernel bug fix and enhancement update " and several other z-stream errata. Revision 1-1.26 Wed Oct 16 2013 Miroslav Svoboda Republished to include Section 7.103.5, " RHSA-2013:1436 - Moderate: kernel security and bug fix update " . Revision 1-1.25 Wed Aug 28 2013 Miroslav Svoboda Republished to include Section 7.103.6, " RHSA-2013:1173 - Important: kernel security and bug fix update " . Revision 1-1.22 Tue Jul 23 2013 Miroslav Svoboda Republished to include Section 7.103.7, " RHSA-2013:1051 - Moderate: kernel security and bug fix update " . Revision 1-1.21 Tue Jun 25 2013 Eliska Slobodova Republished to include a samba4 known issue. Revision 1-1.20 Tue Jun 11 2013 Miroslav Svoboda Republished to include Section 7.103.8, " RHSA-2013:0911 - Important: kernel security, bug fix and enhancement update " . Revision 1-1.17 Fri May 24 2013 Eliska Slobodova Removed the numad package from Technology Previews as it is now fully supported. Revision 1-1.15 Fri Apr 26 2013 Eliska Slobodova Republished the book to include a known issue. Revision 1-1.13 Fri Mar 22 2013 Miroslav Svoboda Republished to include Section 7.103.9, " RHSA-2013:0744 - Important: kernel security and bug fix update " . Revision 1-1.11 Fri Mar 22 2013 Martin Prpic Republished to include Section 7.128.1, " RHBA-2013:0664 - libvirt bug fix and enhancement update " . Revision 1-1.10 Tue Mar 12 2013 Eliska Slobodova Republished the book to include the RHSA-2013:0630 kernel advisory and a new known issue, BZ# 918647 . Revision 1-1.2 Mon Feb 25 2013 Martin Prpic Fixed incorrect lpfc driver version: BZ# 915284 . Revision 1-1.1 Thu Feb 21 2013 Eliska Slobodova Release of the Red Hat Enterprise Linux 6.4 Technical Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/appe-technical_notes-revision_history |
Chapter 2. Configuring an Azure account | Chapter 2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account to meet installation requirements. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 2.4. Recording the subscription and tenant IDs The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information. Prerequisites You have installed or updated the Azure CLI . Procedure Log in to the Azure CLI by running the following command: USD az login Ensure that you are using the right subscription: View a list of available subscriptions by running the following command: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } }, { "cloudName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": false, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } ] View the details of the active account, and confirm that this is the subscription you want to use, by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } If you are not using the right subscription: Change the active subscription by running the following command: USD az account set -s <subscription_id> Verify that you are using the subscription you need by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } Record the id and tenantId parameter values from the output. You require these values to install an OpenShift Container Platform cluster. 2.5. Supported identities to access Azure resources An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation: A service principal A system-assigned managed identity A user-assigned managed identity 2.5.1. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. 
If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 2.5.2. Required Azure permissions for installer-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 2.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 2.2. Required permissions for creating compute resources Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Example 2.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 2.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Note The following permissions are not required to create the private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Example 2.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 2.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 2.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 2.9. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 2.10. 
Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Example 2.11. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action Example 2.12. Optional permissions for installing a cluster using the NatGateway outbound type Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.13. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT) Microsoft.Network/natGateways/join/action Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.14. Optional permissions for installing a private cluster with Azure firewall Microsoft.Network/azureFirewalls/applicationRuleCollections/write Microsoft.Network/azureFirewalls/read Microsoft.Network/azureFirewalls/write Microsoft.Network/routeTables/join/action Microsoft.Network/routeTables/read Microsoft.Network/routeTables/routes/read Microsoft.Network/routeTables/routes/write Microsoft.Network/routeTables/write Microsoft.Network/virtualNetworks/peer/action Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write Example 2.15. Optional permission for running gather bootstrap Microsoft.Compute/virtualMachines/retrieveBootDiagnosticsData/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure. Example 2.16. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 2.17. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Example 2.18. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 2.19. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Note The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Example 2.20. 
Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.21. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 2.22. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the Contributor role. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 2.5.3. Using Azure managed identities The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity. If you are unable to use a managed identity, you can use a service principal. Procedure If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from. If you are using a user-assigned managed identity: Assign it to the virtual machine that you will run the installation program from. Record its client ID. You require this value when installing the cluster. For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities . Verify that the required permissions are assigned to the managed identity. 2.5.4. Creating a service principal The installation program requires an Azure identity to complete the installation. You can use a service principal. If you are unable to use a service principal, you can use a managed identity. Prerequisites You have installed or updated the Azure CLI . You have an Azure subscription ID. If you are not going to assign the Contributor and User Administrator Access roles to the service principal, you have created a custom role with the required Azure permissions. Procedure Create the service principal for your account by running the following command: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } Record the values of the appId and password parameters from the output. You require these values when installing the cluster. 
If you applied the Contributor role to your service principal, assign the User Administrator Access role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2 1 Specify the appId parameter value for your service principal. 2 Specifies the subscription ID. Additional resources About the Cloud Credential Operator 2.6. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions. 2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 2.8. steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options. | [
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id>",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-azure-account |
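Before running the installer against the account configured in this chapter, it can be useful to confirm the vCPU quota and the public DNS zone from the command line. The commands below are a sketch that assumes the Azure CLI is installed and logged in with az login; the region, resource group, and zone name are placeholders rather than values mandated by OpenShift Container Platform.

# Show vCPU usage against quota in the target region (a default cluster needs 44 vCPUs).
az vm list-usage --location centralus --output table

# Create the public DNS zone for the cluster's base domain, then print its name servers
# so they can be copied into the registrar's delegation records.
az network dns zone create --resource-group <resource_group> --name clusters.openshiftcorp.com
az network dns zone show --resource-group <resource_group> --name clusters.openshiftcorp.com --query nameServers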
11.5. Deploying ACME Responder | 11.5. Deploying ACME Responder Once you have configured the ACME responder, deploy it using the following command: This creates a deployment descriptor at /etc/pki/pki-tomcat/Catalina/localhost/acme.xml . The PKI server starts the ACME responder automatically after a few seconds; you do not need to restart the server. To verify that the ACME responder is running, use the following command: For more information, see the pki-server-acme manpage. | [
"pki-server acme-deploy",
"curl -s -k https://USDHOSTNAME:8443/acme/directory | python -m json.tool { \"meta\": { \"caaIdentities\": [ \"example.com\" ], \"externalAccountRequired\": false, \"termsOfService\": \"https://example.com/acme/tos.pdf\", \"website\": \"https://www.example.com\" }, \"newAccount\": \"https://<hostname>:8443/acme/new-account\", \"newNonce\": \"https://<hostname>:8443/acme/new-nonce\", \"newOrder\": \"https://<hostname>:8443/acme/new-order\", \"revokeCert\": \"https://<hostname>:8443/acme/revoke-cert\" }"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/deploying_acme |
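Once the directory endpoint answers as in the curl output above, any ACME (RFC 8555) client can be pointed at the responder. The certbot invocation below is a sketch only: the hostname, domain, and webroot path are placeholders, and the responder's TLS certificate must already be trusted by the client machine.

# Request a certificate from the ACME responder with certbot (all names are placeholders).
certbot certonly \
    --server https://pki.example.com:8443/acme/directory \
    --webroot -w /var/www/html \
    -d www.example.com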
Chapter 16. Troubleshooting IdM client installation | Chapter 16. Troubleshooting IdM client installation The following sections describe how to gather information about a failing IdM client installation, and how to resolve common installation issues. 16.1. Reviewing IdM client installation errors When you install an Identity Management (IdM) client, debugging information is appended to /var/log/ipaclient-install.log . If a client installation fails, the installer logs the failure and rolls back changes to undo any modifications to the host. The reason for the installation failure may not be at the end of the log file, as the installer also logs the roll back procedure. To troubleshoot a failing IdM client installation, review lines labeled ScriptError in the /var/log/ipaclient-install.log file and use this information to resolve any corresponding issues. Prerequisites You must have root privileges to display the contents of IdM log files. Procedure Use the grep utility to retrieve any occurrences of the keyword ScriptError from the /var/log/ipaserver-install.log file. To review a log file interactively, open the end of the log file using the less utility and use the ^ and v arrow keys to navigate. Additional resources If you are unable to resolve a failing IdM client installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the client. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 16.2. Resolving issues if the client installation fails to update DNS records The IdM client installer issues nsupdate commands to create PTR, SSHFP, and additional DNS records. However, the installation process fails if the client is unable to update DNS records after installing and configuring the client software. To fix this problem, verify the configuration and review DNS errors in /var/log/client-install.log . Prerequisites You are using IdM DNS as the DNS solution for your IdM environment Procedure Ensure that dynamic updates for the DNS zone the client is in are enabled: Ensure that the IdM server running the DNS service has port 53 opened for both TCP and UDP protocols. Use the grep utility to retrieve the contents of nsupdate commands from /var/log/client-install.log to see which DNS record updates are failing. Additional resources If you are unable to resolve a failing installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the client. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 16.3. Resolving issues if the client installation fails to join the IdM Kerberos realm The IdM client installation process fails if the client is unable to join the IdM Kerberos realm. This failure can be caused by an empty Kerberos keytab. Prerequisites Removing system files requires root privileges. Procedure Remove /etc/krb5.keytab . Retry the IdM client installation. 
Additional resources If you are unable to resolve a failing installation, and you have a Red Hat Technical Support subscription, open a Technical Support case at the Red Hat Customer Portal and provide an sosreport of the client. The sosreport utility collects configuration details, logs and system information from a RHEL system. For more information about the sosreport utility, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 16.4. Resolving issues if the client installation fails to configure automount In RHEL 7, you could configure an automount location for your client during the client installation. In RHEL 8, running the ipa-client-install command with the --automount-location <raleigh> fails to configure the automount location. However, as the rest of the installation is successful, running /usr/sbin/ipa-client-automount <raleigh> after the installation configures an automount location for the client correctly. Prerequisites With the exception of configuring an automount location, the IdM client installation proceeded correctly. The CLI output was: Procedure Configure the automount location: Additional resources man ipa-client-automount 16.5. Additional resources To troubleshoot installing the first IdM server, see Troubleshooting IdM server installation . To troubleshoot installing an IdM replica, see Troubleshooting IdM replica installation . | [
"[user@server ~]USD sudo grep ScriptError /var/log/ipaclient-install.log [sudo] password for user: 2020-05-28T18:24:50Z DEBUG The ipa-client-install command failed, exception: ScriptError : One of password / principal / keytab is required.",
"[user@server ~]USD sudo less -N +G /var/log/ipaclient-install.log",
"[user@server ~]USD ipa dnszone-mod idm.example.com. --dynamic-update=TRUE",
"[user@server ~]USD sudo firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp [sudo] password for user: success [user@server ~]USD firewall-cmd --runtime-to-permanent success",
"[user@server ~]USD sudo grep nsupdate /var/log/ipaclient-install.log",
"Joining realm failed: Failed to add key to the keytab child exited with 11 Installation failed. Rolling back changes.",
"[user@client ~]USD sudo rm /etc/krb5.keytab [sudo] password for user: [user@client ~]USD ls /etc/krb5.keytab ls: cannot access '/etc/krb5.keytab': No such file or directory",
"The ipa-client-install command was successful.",
"/usr/sbin/ipa-client-automount -U --location <raleigh>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/troubleshooting-idm-client-installation_installing-identity-management |
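The individual log checks from this chapter can be bundled into a quick triage script. This is a sketch only: it assumes a bash shell and root privileges and uses the log file and keytab paths quoted in the chapter; the headings it prints are purely cosmetic.

#!/usr/bin/env bash
# Rough triage for a failed ipa-client-install run.
set -euo pipefail

echo '== ScriptError lines in the client install log =='
grep ScriptError /var/log/ipaclient-install.log || echo 'none found'

echo '== nsupdate entries (failing DNS record updates) =='
grep nsupdate /var/log/client-install.log || echo 'none found'

echo '== Kerberos keytab present? =='
ls -l /etc/krb5.keytab 2>/dev/null || echo '/etc/krb5.keytab missing or already removed'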
Chapter 1. Upgrading a self-hosted engine environment | Chapter 1. Upgrading a self-hosted engine environment 1.1. Upgrading a self-Hosted engine from Red Hat Virtualization 4.3 to 4.4 Upgrading a self-hosted engine environment from version 4.3 to 4.4 involves the following steps: Upgrade Considerations When planning to upgrade, see Red Hat Virtualization 4.4 upgrade considerations and known issues . When upgrading from Open Virtual Network (OVN) and Open vSwitch (OvS) 2.11 to OVN 2021 and OvS 2.15, the process is transparent to the user as long as the following conditions are met: The Manager is upgraded first. The ovirt-provider-ovn security groups must be disabled, before the host upgrade, for all OVN networks that are expected to work between hosts with OVN/OvS version 2.11. The hosts are upgraded to match OVN version 2021 or higher and OvS version 2.15. You must complete this step in the Administration Portal, so you can properly reconfigure OVN and refresh the certificates. The host is rebooted after an upgrade. Note To verify whether the provider and OVN were configured successfully on the host, check the OVN configured flag on the General tab for the host. If the OVN Configured is set to No , click Management Refresh Capabilities . This setting is also available in the REST API. If refreshing the capabilities fails, you can configure OVN by reinstalling the host from Manager 4.4 or higher. Make sure you meet the prerequisites, including enabling the correct repositories Use the Log Collection Analysis tool and Image Discrepancies tool to check for issues that might prevent a successful upgrade Migrate any virtual machines that are running on the same host as the Manager virtual machine to another host in the same cluster Place the environment in global maintenance mode Update the 4.3 Manager to the latest version of 4.3 Upgrade the Manager from 4.3 to 4.4 Upgrade the self-hosted engine nodes, and any standard hosts, while reducing virtual machine downtime (Optional) Upgrade RHVH while preserving local storage Update the compatibility version of the clusters Reboot any running or suspended virtual machines to update their configuration Update the compatibility version of the data centers 1.1.1. Prerequisites Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes. Ensure your environment meets the requirements for Red Hat Virtualization 4.4. For a complete list of prerequisites, see the Planning and Prerequisites Guide . When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure. 1.1.2. Analyzing the Environment It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them. 1.1.3. Log Collection Analysis tool Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. 
The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. The tool gathers detailed information about your system and presents it as an HTML file. Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.3. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Install the Log Collection Analysis tool on the Manager machine: Run the tool: A detailed report is displayed. By default, the report is saved to a file called analyzer_report.html . To save the file to a specific location, use the --html flag and specify the location: # rhv-log-collector-analyzer --live --html=/ directory / filename .html You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser: Launch ELinks and open analyzer_report.html . To navigate the report, use the following commands in ELinks: Insert to scroll up Delete to scroll down PageUp to page up PageDown to page down Left Bracket to scroll left Right Bracket to scroll right 1.1.3.1. Monitoring snapshot health with the image discrepancies tool The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database. It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as: Before upgrading versions, to avoid carrying over broken volumes or chains to the new version. Following a failed storage operation, to detect volumes or attributes in a bad state. After restoring the RHV database or storage from backup. Periodically, to detect potential problems before they worsen. To analyze a snapshot- or live storage migration-related issues, and to verify system health after fixing these types of problems. Prerequisites Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev . Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual Machines can remain running normally during the process. Procedure To run the tool, enter the following command on the RHV Manager: If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running. Note This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database. Understanding the results The tool reports the following: If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage. If some volume attributes differ between the storage and the database. Sample output: 1.1.4. Migrating virtual machines from the self-hosted engine host Only the Manager virtual machine should remain on the host until after you have finished upgrading the host. Migrate any virtual machines other than the Manager virtual machine to another host in the same cluster. 
You can use Live Migration to minimize virtual machine down-time. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide for more information. 1.1.5. Enabling global maintenance mode You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine. Procedure Log in to one of the self-hosted engine nodes and enable global maintenance mode: # hosted-engine --set-maintenance --mode=global Confirm that the environment is in global maintenance mode before proceeding: # hosted-engine --vm-status You should see a message indicating that the cluster is in global maintenance mode. You can now update the Manager to the latest version of 4.3. 1.1.6. Updating the Red Hat Virtualization Manager Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.3. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update. You can now upgrade the Manager to 4.4. 1.1.7. Upgrading the Red Hat Virtualization Manager from 4.3 to 4.4 The Red Hat Virtualization Manager 4.4 is only supported on Red Hat Enterprise Linux versions 8.2 to 8.6. You need to do a clean installation of Red Hat Enterprise Linux 8.6, or Red Hat Virtualization Host on the self-hosted engine host, even if you are using the same physical machine that you use to run the RHV 4.3 self-hosted engine. The upgrade process requires restoring Red Hat Virtualization Manager 4.3 backup files onto the Red Hat Virtualization Manager 4.4 virtual machine. Prerequisites All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3. All virtual machines in the environment must have the cluster compatibility level set to version 4.3. 
Make note of the MAC address of the self-hosted engine if you are using DHCP and want to use the same IP address. The deploy script prompts you for this information. During the deployment you need to provide a new storage domain for the Manager machine. The deployment script renames the 4.3 storage domain and retains its data to enable disaster recovery. Set the cluster scheduling policy to cluster_maintenance in order to prevent automatic virtual machine migration during the upgrade. Caution In an environment with multiple highly available self-hosted engine nodes, you need to detach the storage domain hosting the version 4.3 Manager after upgrading the Manager to 4.4. Use a dedicated storage domain for the 4.4 self-hosted engine deployment. If you use an external CA to sign HTTPS certificates, follow the steps in Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide . The backup and restore include the 3rd-party certificate, so you should be able to log in to the Administration portal after the upgrade. Ensure the CA certificate is added to system-wide trust stores of all clients to ensure the foreign menu of virt-viewer works. See BZ#1313379 for more information. Note Connected hosts and virtual machines can continue to work while the Manager is being upgraded. Procedure Log in to the Manager virtual machine and shut down the engine service. # systemctl stop ovirt-engine Back up the Red Hat Virtualization Manager 4.3 environment. # engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log Copy the backup file to a storage device outside of the RHV environment. Shut down the self-hosted engine. # shutdown Note If you want to reuse the self-hosted engine virtual machine to deploy the Red Hat Virtualization Manager 4.4, note the MAC address of the self-hosted engine network interface before you shut it down. Make sure that the self-hosted engine is shut down. # hosted-engine --vm-status | grep -E 'Engine status|Hostname' Note If any of the hosts report the detail field as Up , log in to that specific host and shut it down with the hosted-engine --vm-shutdown command. Install RHVH 4.4 or Red Hat Enterprise Linux 8.6 on the existing node currently running the Manager virtual machine to use it as the self-hosted engine deployment host. See Installing the Self-hosted Engine Deployment Host for more information. Note It is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure. Install the self-hosted engine deployment tool. # yum install ovirt-hosted-engine-setup Copy the backup file to the host. Log in to the Manager host and deploy the self-hosted engine with the backup file: # hosted-engine --deploy --restore-from-file=/ path /backup.bck Note tmux enables the deployment script to continue if the connection to the server is interrupted, so you can reconnect and attach to the deployment and continue. Otherwise, if the connection is interrupted during deployment, the deployment fails. To run the deployment script using tmux , enter the tmux command before you run the deployment script: # tmux # hosted-engine --deploy --restore-from-file=backup.bck The deployment script automatically disables global maintenance mode and calls the HA agent to start the self-hosted engine virtual machine. 
The upgraded host with the 4.4 self-hosted engine reports that HA mode is active, but the other hosts report that global maintenance mode is still enabled as they are still connected to the old self-hosted engine storage. Detach the storage domain that hosts the Manager 4.3 machine. For details, see Detaching a Storage Domain from a Data Center in the Administration Guide . Log in to the Manager virtual machine and shut down the engine service. # systemctl stop ovirt-engine Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.4. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Install optional extension packages if they were installed on the Red Hat Virtualization Manager 4.3 machine. # yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . Note The configuration for these package extensions must be manually reapplied because they are not migrated as part of the backup and restore process. Configure the Manager by running the engine-setup command: # engine-setup The Red Hat Virtualization Manager 4.4 is now installed, with the cluster compatibility version set to 4.2 or 4.3, whichever was the preexisting cluster compatibility version. Additional resources Installing Red Hat Virtualization as a self-hosted engine using the command line You can now update the self-hosted engine nodes, and then any standard hosts. The procedure is the same for both host types. 1.1.8. Migrating hosts and virtual machines from RHV 4.3 to 4.4 You can migrate hosts and virtual machines from Red Hat Virtualization 4.3 to 4.4 such that you minimize the downtime of virtual machines in your environment. This process requires migrating all virtual machines from one host so as to make that host available to upgrade to RHV 4.4. After the upgrade, you can reattach the host to the Manager. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Note CPU-passthrough virtual machines might not migrate properly from RHV 4.3 to RHV 4.4. RHV 4.3 and RHV 4.4 are based on RHEL 7 and RHEL 8, respectively, which have different kernel versions with different CPU flags and microcodes. This can cause problems in migrating CPU-passthrough virtual machines. Prerequisites Hosts for RHV 4.4 require Red Hat Enterprise Linux versions 8.2 to 8.6. A clean installation of Red Hat Enterprise Linux 8.6, or Red Hat Virtualization Host 4.4 is required, even if you are using the same physical machine that you use to run hosts for RHV 4.3. Red Hat Virtualization Manager 4.4 is installed and running. The compatibility level of the data center and cluster to which the hosts belong is set to 4.2 or 4.3. All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3 before you start the procedure. Procedure Pick a host to upgrade and migrate that host's virtual machines to another host in the same cluster. You can use Live Migration to minimize virtual machine downtime. 
For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide . Put the host into maintenance mode and remove the host from the Manager. For more information, see Removing a Host in the Administration Guide . Install Red Hat Enterprise Linux 8.6, or RHVH 4.4. For more information, see Installing Hosts for Red Hat Virtualization in one of the Installing Red Hat Virtualization guides. Install the appropriate packages to enable the host for RHV 4.4. For more information, see Installing Hosts for Red Hat Virtualization in one of the Installing Red Hat Virtualization guides. Add this host to the Manager, assigning it to the same cluster. You can now migrate virtual machines onto this host. For more information, see Adding Standard Hosts to the Manager in one of the Installing Red Hat Virtualization guides. Repeat these steps to migrate virtual machines and upgrade hosts for the rest of the hosts in the same cluster, one by one, until all are running Red Hat Virtualization 4.4. Additional resources Installing Red Hat Virtualization as a self-hosted engine using the command line Installing Red Hat Virtualization as a standalone Manager with local databases Installing Red Hat Virtualization as a standalone Manager with remote databases 1.1.9. Upgrading RHVH while preserving local storage Environments with local storage cannot migrate virtual machines to a host in another cluster because the local storage is not shared with other storage domains. To upgrade RHVH 4.3 hosts that have a local storage domain, reinstall the host while preserving the local storage, create a new local storage domain in the 4.4 environment, and import the local storage into the new domain. Prerequisites Red Hat Virtualization Manager 4.4 is installed and running. The compatibility level of the data center and cluster to which the host belongs is set to 4.2 or 4.3. Procedure Ensure that the local storage on the RHVH 4.3 host's local storage is in maintenance mode before starting this process. Complete these steps: Open the Data Centers tab. Click the Storage tab in the Details pane and select the storage domain in the results list. Click Maintenance . Reinstall the Red Hat Virtualization Host, as described in Installing Red Hat Virtualization Host in the Installation Guide . Important When selecting the device on which to install RHVH from the Installation Destination screen, do not select the device(s) storing the virtual machines. Only select the device where the operating system should be installed. If you are using Kickstart to install the host, ensure that you preserve the devices containing the virtual machines by adding the following to the Kickstart file, replacing `device` with the relevant device. # clearpart --all --drives= device For more information on using Kickstart, see Kickstart references in Red Hat Enterprise Linux 8 Performing an advanced RHEL installation . On the reinstalled host, create a directory, for example /data in which to recover the environment. # mkdir /data Mount the local storage in the new directory. In our example, /dev/sdX1 is the local storage: # mount /dev/sdX1 /data Set the following permissions for the new directory. 
# chown -R 36:36 /data # chmod -R 0755 /data Red Hat recommends that you also automatically mount the local storage via /etc/fstab in case the server requires a reboot: # blkid | grep -i sdX1 /dev/sdX1: UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" TYPE="ext4" # vi /etc/fstab UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" /data ext4 defaults 0 0 In the Administration Portal, create a data center and select Local in the Storage Type drop-down menu. Configure a cluster on the new data center. See Creating a New Cluster in the Administration Guide for more information. Add the host to the Manager. See Adding Standard Hosts to the Red Hat Virtualization Manager in one of the Installing Red Hat Virtualization guides for more information. On the host, create a new directory that will be used to create the initial local storage domain. For example: # mkdir -p /localfs # chown 36:36 /localfs # chmod -R 0755 /localfs In the Administration Portal, open the Storage tab and click New Domain to create a new local storage domain. Set the name to localfs and set the path to /localfs . Once the local storage is active, click Import Domain and set the domain's details. For example, define Data as the name, Local on Host as the storage type and /data as the path. Click OK to confirm the message that appears informing you that storage domains are already attached to the data center. Activate the new storage domain: Open the Data Centers tab. Click the Storage tab in the details pane and select the new data storage domain in the results list. Click Activate . Once the new storage domain is active, import the virtual machines and their disks: In the Storage tab, select data . Select the VM Import tab in the details pane, select the virtual machines and click Import . See Importing Virtual Machines from a Data Domain in the Virtual Machine Management Guide for more details. Once you have ensured that all virtual machines have been successfully imported and are functioning properly, you can move localfs to maintenance mode. Click the Storage tab and select localfs from the results list. Click the Data Center tab in the details pane. Click Maintenance, then click OK to move the storage domain to maintenance mode. Click Detach . The Detach Storage confirmation window opens. Click OK . You have now upgraded the host to version 4.4, created a new local storage domain, and imported the 4.3 storage domain and its virtual machines. 1.1.10. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon to the host indicating an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. 
If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. 1.1.11. Changing Virtual Machine Cluster Compatibility After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon ( ). The Manager virtual machine does not need to be rebooted. Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes. Procedure In the Administration Portal, click Compute Virtual Machines . Check which virtual machines require a reboot. In the Vms: search bar, enter the following query: next_run_config_exists=True The search results show all virtual machines with pending changes. Select each virtual machine and click Restart . Alternatively, if necessary, you can reboot a virtual machine from within the virtual machine itself. When the virtual machine starts, the new compatibility version is automatically applied. Note You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. 1.1.12. Changing the Data Center Compatibility Version Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level. Prerequisites To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center. Procedure In the Administration Portal, click Compute Data Centers . Select the data center to change and click Edit . Change the Compatibility Version to the desired value. Click OK . The Change Data Center Compatibility Version confirmation dialog opens. Click OK to confirm. | [
"yum install rhv-log-collector-analyzer",
"rhv-log-collector-analyzer --live",
"rhv-log-collector-analyzer --live --html=/ directory / filename .html",
"yum install -y elinks",
"elinks /home/user1/analyzer_report.html",
"rhv-image-discrepancies",
"Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624) image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976) Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes No discrepancies found",
"hosted-engine --set-maintenance --mode=global",
"hosted-engine --vm-status",
"engine-upgrade-check",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"yum update --nobest",
"systemctl stop ovirt-engine",
"engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log",
"shutdown",
"hosted-engine --vm-status | grep -E 'Engine status|Hostname'",
"yum install ovirt-hosted-engine-setup",
"hosted-engine --deploy --restore-from-file=/ path /backup.bck",
"tmux hosted-engine --deploy --restore-from-file=backup.bck",
"systemctl stop ovirt-engine",
"yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc",
"engine-setup",
"clearpart --all --drives= device",
"mkdir /data",
"mount /dev/sdX1 /data",
"chown -R 36:36 /data chmod -R 0755 /data",
"blkid | grep -i sdX1 /dev/sdX1: UUID=\"a81a6879-3764-48d0-8b21-2898c318ef7c\" TYPE=\"ext4\" vi /etc/fstab UUID=\"a81a6879-3764-48d0-8b21-2898c318ef7c\" /data ext4 defaults 0 0",
"mkdir -p /localfs chown 36:36 /localfs chmod -R 0755 /localfs",
"next_run_config_exists=True"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/upgrade_guide/upgrading-self-hosted-engine-environment |
Chapter 289. Salesforce Component | Chapter 289. Salesforce Component Available as of Camel version 2.12 This component supports producer and consumer endpoints to communicate with Salesforce using Java DTOs. There is a companion maven plugin Camel Salesforce Plugin that generates these DTOs (see further below). Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-salesforce</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> Note Developers wishing to contribute to the component are instructed to look at the README.md file for instructions on how to get started and set up your environment for running integration tests. 289.1. Authenticating to Salesforce The component supports three OAuth authentication flows: OAuth 2.0 Username-Password Flow OAuth 2.0 Refresh Token Flow OAuth 2.0 JWT Bearer Token Flow For each of the flows a different set of properties needs to be set: Table 289.1. Properties to set for each authentication flow Property Where to find it on Salesforce Flow clientId Connected App, Consumer Key All flows clientSecret Connected App, Consumer Secret Username-Password, Refresh Token userName Salesforce user username Username-Password, JWT Bearer Token password Salesforce user password Username-Password refreshToken From OAuth flow callback Refresh Token keystore Connected App, Digital Certificate JWT Bearer Token The component auto-determines which flow you're trying to configure; to remove any ambiguity, set the authenticationType property. Note Using Username-Password Flow in production is not encouraged. Note The certificate used in JWT Bearer Token Flow can be a self-signed certificate. The KeyStore holding the certificate and the private key must contain only a single certificate-private key entry. 289.2. URI format When used as a consumer, receiving streaming events, the URI scheme is: salesforce:topic?options When used as a producer, invoking the Salesforce REST APIs, the URI scheme is: salesforce:operationName?options You can append query options to the URI in the following format, ?option=value&option=value&... 289.3. Passing in Salesforce headers and fetching Salesforce response headers With Camel 2.21 there is support to pass Salesforce headers via inbound message headers: header names that start with Sforce or x-sfdc on the Camel message will be passed on in the request, and response headers that start with Sforce will be present in the outbound message headers. For example to fetch API limits you can specify: // in your Camel route set the header before Salesforce endpoint //... .setHeader("Sforce-Limit-Info", constant("api-usage")) .to("salesforce:getGlobalObjects") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader("Sforce-Limit-Info", String.class); } } 289.4. Supported Salesforce APIs The component supports the following Salesforce APIs. Producer endpoints can use the following APIs. Most of the APIs process one record at a time; the Query API can retrieve multiple records. 289.4.1.
Rest API You can use the following for operationName : getVersions - Gets supported Salesforce REST API versions getResources - Gets available Salesforce REST Resource endpoints getGlobalObjects - Gets metadata for all available SObject types getBasicInfo - Gets basic metadata for a specific SObject type getDescription - Gets comprehensive metadata for a specific SObject type getSObject - Gets an SObject using its Salesforce Id createSObject - Creates an SObject updateSObject - Updates an SObject using Id deleteSObject - Deletes an SObject using Id getSObjectWithId - Gets an SObject using an external (user defined) id field upsertSObject - Updates or inserts an SObject using an external id deleteSObjectWithId - Deletes an SObject using an external id query - Runs a Salesforce SOQL query queryMore - Retrieves more results (in case of a large number of results) using the result link returned from the 'query' API search - Runs a Salesforce SOSL query limits - fetching organization API usage limits recent - fetching recent items approval - submit a record or records (batch) for approval process approvals - fetch a list of all approval processes composite - submit up to 25 possibly related REST requests and receive individual responses composite-tree - create up to 200 records with parent-child relationships (up to 5 levels) in one go composite-batch - submit a composition of requests in batch queryAll - Runs a SOQL query. It returns the results that are deleted because of a merge or delete. Also returns the information about archived Task and Event records. getBlobField - Retrieves the specified blob field from an individual record. apexCall - Executes a user defined APEX REST API call. For example, the following producer endpoint uses the upsertSObject API, with the sObjectIdName parameter specifying 'Name' as the external id field. The request message body should be an SObject DTO generated using the maven plugin. The response message will either be null if an existing record was updated, or CreateSObjectResult with an id of the new record, or a list of errors while creating the new object. ...to("salesforce:upsertSObject?sObjectIdName=Name")... 289.4.2. Bulk 2.0 API The Bulk 2.0 API has a simplified model over the original Bulk API. Use it to quickly load a large amount of data into Salesforce, or query a large amount of data out of Salesforce. Data must be provided in CSV format. The minimum API version for Bulk 2.0 is v41.0. The minimum API version for Bulk Queries is v47.0. DTO classes mentioned below are from the org.apache.camel.component.salesforce.api.dto.bulkv2 package. The following operations are supported: bulk2CreateJob - Create a bulk job. Supply an instance of Job in the message body. bulk2GetJob - Get an existing Job. jobId parameter is required. bulk2CreateBatch - Add a Batch of CSV records to a job. Supply CSV data in the message body. The first row must contain headers. jobId parameter is required. bulk2CloseJob - Close a job. You must close the job in order for it to be processed or aborted/deleted. jobId parameter is required. bulk2AbortJob - Abort a job. jobId parameter is required. bulk2DeleteJob - Delete a job. jobId parameter is required. bulk2GetSuccessfulResults - Get successful results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetFailedResults - Get failed results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetUnprocessedRecords - Get unprocessed records for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetAllJobs - Get all jobs. Response body is an instance of Jobs . If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls. bulk2CreateQueryJob - Create a bulk query job. Supply an instance of QueryJob in the message body. bulk2GetQueryJob - Get a bulk query job. jobId parameter is required. bulk2GetQueryJobResults - Get bulk query job results. jobId parameter is required. bulk2AbortQueryJob - Abort a bulk query job. jobId parameter is required. bulk2DeleteQueryJob - Delete a bulk query job. jobId parameter is required. bulk2GetAllQueryJobs - Get all jobs. Response body is an instance of QueryJobs . If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls.
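The Bulk 2.0 operations above are not illustrated with a route in this chapter, so the following is a minimal sketch of a CSV ingest flow: create a job, upload one CSV batch, and close the job so Salesforce processes it. It is an illustration only and rests on a few assumptions you should verify against the generated DTOs in org.apache.camel.component.salesforce.api.dto.bulkv2 and your component version: that bulk2CreateJob returns the created Job with its id populated, that the Job DTO exposes setObject and setOperation setters taking an OperationEnum, and that the jobId endpoint option can be supplied dynamically with toD.

from("direct:bulk2InsertContacts")
    .process(exchange -> {
        // describe the job: which SObject and operation the CSV rows apply to
        Job job = new Job();
        job.setObject("Contact");                        // assumed setter name
        job.setOperation(OperationEnum.INSERT);          // assumed enum/setter name
        exchange.getIn().setBody(job);
    })
    .to("salesforce:bulk2CreateJob")
    .process(exchange -> {
        // remember the generated job id and replace the body with the CSV payload
        Job created = exchange.getIn().getBody(Job.class);
        exchange.getIn().setHeader("MyBulk2JobId", created.getId());   // custom header name
        exchange.getIn().setBody("FirstName,LastName\nJohn,Doe\n");
    })
    .toD("salesforce:bulk2CreateBatch?jobId=${header.MyBulk2JobId}")
    .toD("salesforce:bulk2CloseJob?jobId=${header.MyBulk2JobId}");

Closing the job tells Salesforce to start processing the uploaded batches; the outcome can later be collected with bulk2GetSuccessfulResults and bulk2GetFailedResults for the same jobId.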
289.4.3. Rest Bulk (original) API Producer endpoints can use the following APIs. All Job data formats, i.e. xml, csv, zip/xml, and zip/csv are supported. The request and response have to be marshalled/unmarshalled by the route. Usually the request will be some stream source like a CSV file, and the response may also be saved to a file to be correlated with the request. You can use the following for operationName : createJob - Creates a Salesforce Bulk Job getJob - Gets a Job using its Salesforce Id closeJob - Closes a Job abortJob - Aborts a Job createBatch - Submits a Batch within a Bulk Job getBatch - Gets a Batch using Id getAllBatches - Gets all Batches for a Bulk Job Id getRequest - Gets Request data (XML/CSV) for a Batch getResults - Gets the results of the Batch when it is complete createBatchQuery - Creates a Batch from an SOQL query getQueryResultIds - Gets a list of Result Ids for a Batch Query getQueryResult - Gets results for a Result Id getRecentReports - Gets up to 200 of the reports you most recently viewed by sending a GET request to the Report List resource. getReportDescription - Retrieves the report, report type, and related metadata for a report, either in a tabular or summary or matrix format. executeSyncReport - Runs a report synchronously with or without changing filters and returns the latest summary data. executeAsyncReport - Runs an instance of a report asynchronously with or without filters and returns the summary data with or without details. getReportInstances - Returns a list of instances for a report that you requested to be run asynchronously. Each item in the list is treated as a separate instance of the report. getReportResults: Contains the results of running a report. For example, the following producer endpoint uses the createBatch API to create a Job Batch. The in message must contain a body that can be converted into an InputStream (usually UTF-8 CSV or XML content from a file, etc.) and header fields 'jobId' for the Job and 'contentType' for the Job content type, which can be XML, CSV, ZIP_XML or ZIP_CSV. The output message body will contain BatchInfo on success; on error a SalesforceException is thrown. ...to("salesforce:createBatch")..
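Tying the original Bulk API operations together, a job is typically created first so that its id can be passed to createBatch in the 'jobId' header described above, together with the 'contentType' header. The sketch below is illustrative only; it assumes createJob returns a populated JobInfo, that the JobInfo DTO (org.apache.camel.component.salesforce.api.dto.bulk) exposes setObject, setOperation and setContentType setters, and that the header values shown are accepted as-is.

from("direct:bulkInsertContacts")                      // body: CSV content (String, File, InputStream, ...)
    .setProperty("csv", body())                        // keep the CSV aside while the job is created
    .process(exchange -> {
        JobInfo jobInfo = new JobInfo();
        jobInfo.setObject("Contact");                  // assumed setter names
        jobInfo.setOperation(OperationEnum.INSERT);
        jobInfo.setContentType(ContentType.CSV);
        exchange.getIn().setBody(jobInfo);
    })
    .to("salesforce:createJob")
    .process(exchange -> {
        // createBatch reads the job id and content type from these headers
        JobInfo created = exchange.getIn().getBody(JobInfo.class);
        exchange.getIn().setHeader("jobId", created.getId());
        exchange.getIn().setHeader("contentType", created.getContentType());
        exchange.getIn().setBody(exchange.getProperty("csv"));
    })
    .to("salesforce:createBatch");

The job can afterwards be closed with the closeJob operation, and getResults retrieves the outcome of the batch once it is complete.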
289.4.4. Rest Streaming API Consumer endpoints can use the following syntax for streaming endpoints to receive Salesforce notifications on create/update. To create and subscribe to a topic from("salesforce:CamelTestTopic?notifyForFields=ALL&notifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c")... To subscribe to an existing topic from("salesforce:CamelTestTopic?sObjectName=Merchandise__c")... 289.4.5. Platform events To emit a platform event use the createSObject operation. Set the message body to a JSON string or an InputStream with key-value data - in that case sObjectName needs to be set to the API name of the event - or to a class that extends from AbstractDTOBase with the appropriate class name for the event. For example using a DTO: class Order_Event__e extends AbstractDTOBase { @JsonProperty("OrderNumber") private String orderNumber; // ... other properties and getters/setters } from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + String.valueOf(in.getHeader(Exchange.TIMER_COUNTER)); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to("salesforce:createSObject"); Or using JSON event data: from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + String.valueOf(in.getHeader(Exchange.TIMER_COUNTER)); in.setBody("{\"OrderNumber\":\"" + orderNumber + "\"}"); }) .to("salesforce:createSObject?sObjectName=Order_Event__e"); To receive platform events use the consumer endpoint with the API name of the platform event prefixed with event/ (or /event/ ), e.g.: salesforce:event/Order_Event__e . A processor consuming from that endpoint will receive either an org.apache.camel.component.salesforce.api.dto.PlatformEvent object or an org.cometd.bayeux.Message in the body, depending on rawPayload being false or true respectively. For example, in the simplest form to consume one event: PlatformEvent event = consumer.receiveBody("salesforce:event/Order_Event__e", PlatformEvent.class);
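For a long-running subscriber the same endpoint can also be used directly as a route consumer rather than with a polling consumer. The sketch below only logs what arrives and assumes the Order_Event__e platform event from the producer examples above exists in the org; with rawPayload=true the body would be the raw CometD message instead of the PlatformEvent DTO.

from("salesforce:event/Order_Event__e")        // body is a PlatformEvent DTO by default
    .log("Received platform event: ${body}");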
289.5. Examples 289.5.1. Uploading a document to a ContentWorkspace Create the ContentVersion in Java, using a Processor instance: public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle("test document"); cv.setPathOnClient("test_doc.html"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkspace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here .... } } Give the output from the processor to the Salesforce component: from("file:///home/camel/library") .process(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to("salesforce:createSObject"); 289.6. Using Salesforce Limits API With the salesforce:limits operation you can fetch API limits from Salesforce and then act upon the data received. The result of the salesforce:limits operation is mapped to the org.apache.camel.component.salesforce.api.dto.Limits class and can be used in custom processors or expressions. For instance, consider that you need to limit the API usage of Salesforce so that 10% of daily API requests is left for other routes. The body of the output message contains an instance of the org.apache.camel.component.salesforce.api.dto.Limits object that can be used in conjunction with the Content Based Router and the Spring Expression Language (SpEL) to choose when to perform queries. Notice how multiplying 1.0 with the integer value held in body.dailyApiRequests.remaining makes the expression evaluate using floating point arithmetic; without it, it would end up performing integer division, which would result in either 0 (some API limits consumed) or 1 (no API limits consumed). from("direct:querySalesforce") .to("salesforce:limits") .choice() .when(spel("#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}")) .to("salesforce:query?...") .otherwise() .setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes")) .endChoice() 289.7. Working with approvals All the properties are named exactly the same as in the Salesforce REST API prefixed with approval. . You can set approval properties by setting approval.PropertyName on the Endpoint; these will be used as a template - meaning that any property not present in either body or header will be taken from the Endpoint configuration. Or you can set the approval template on the Endpoint by assigning approval property to a reference onto a bean in the Registry. You can also provide header values using the same approval.PropertyName in the incoming message headers. And finally body can contain one ApprovalRequest or an Iterable of ApprovalRequest objects to process as a batch. The important thing to remember is the priority of the values specified in these three mechanisms: value in body takes precedence over any other value in message header takes precedence over template value value in template is set if no other value in header or body was given For example to send one record for approval using values in headers use: Given a route: from("direct:example1")// .setHeader("approval.ContextId", simple("${body['contextId']}")) .setHeader("approval.NextApproverIds", simple("${body['nextApproverIds']}")) .to("salesforce:approval?"// + "approval.actionType=Submit"// + "&approval.comments=this is a test"// + "&approval.processDefinitionNameOrId=Test_Account_Process"// + "&approval.skipEntryCriteria=true"); You could send a record for approval using: final Map<String, String> body = new HashMap<>(); body.put("contextId", accountIds.iterator().next()); body.put("nextApproverIds", userId); final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class); 289.8. Using Salesforce Recent Items API To fetch the recent items use the salesforce:recent operation. This operation returns a java.util.List of org.apache.camel.component.salesforce.api.dto.RecentItem objects ( List<RecentItem> ) that in turn contain the Id , Name and Attributes (with type and url properties). You can limit the number of returned items by specifying the limit parameter set to the maximum number of records to return. For example: from("direct:fetchRecentItems") .to("salesforce:recent") .split().body() .log("${body.name} at ${body.attributes.url}"); 289.9. Working with approvals All the properties are named exactly the same as in the Salesforce REST API prefixed with approval. . You can set approval properties by setting approval.PropertyName on the Endpoint; these will be used as a template - meaning that any property not present in either body or header will be taken from the Endpoint configuration.
Or you can set the approval template on the Endpoint by assigning approval property to a reference onto a bean in the Registry. You can also provide header values using the same approval.PropertyName in the incoming message headers. And finally body can contain one ApprovalRequest or an Iterable of ApprovalRequest objects to process as a batch. The important thing to remember is the priority of the values specified in these three mechanisms: value in body takes precedence over any other value in message header takes precedence over template value value in template is set if no other value in header or body was given For example to send one record for approval using values in headers use: Given a route: from("direct:example1")// .setHeader("approval.ContextId", simple("${body['contextId']}")) .setHeader("approval.NextApproverIds", simple("${body['nextApproverIds']}")) .to("salesforce:approval?"// + "approvalActionType=Submit"// + "&approvalComments=this is a test"// + "&approvalProcessDefinitionNameOrId=Test_Account_Process"// + "&approvalSkipEntryCriteria=true"); You could send a record for approval using: final Map<String, String> body = new HashMap<>(); body.put("contextId", accountIds.iterator().next()); body.put("nextApproverIds", userId); final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class); 289.10. Using Salesforce Composite API to submit SObject tree To create up to 200 records including parent-child relationships use the salesforce:composite-tree operation. This requires an instance of org.apache.camel.component.salesforce.api.dto.composite.SObjectTree in the input message and returns the same tree of objects in the output message. The org.apache.camel.component.salesforce.api.dto.AbstractSObjectBase instances within the tree get updated with the identifier values ( Id property) or their corresponding org.apache.camel.component.salesforce.api.dto.composite.SObjectNode is populated with errors on failure. Note that for some records the operation can succeed and for some it can fail - so you need to manually check for errors. The easiest way to use this functionality is to use the DTOs generated by the camel-salesforce-maven-plugin , but you also have the option of customizing the references that identify each object in the tree, for instance primary keys from your database. Let's look at an example: Account account = ... Contact president = ... Contact marketing = ... Account anotherAccount = ... Contact sales = ... Asset someAsset = ... // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody("salesforce:composite-tree", request, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId(); 289.11. Using Salesforce Composite API to submit multiple requests in a batch The Composite API batch operation ( composite-batch ) allows you to accumulate multiple requests in a batch and then submit them in one go, saving the round trip cost of multiple individual requests. Each response is then received in a list of responses with the order preserved, so that the n-th request's response is in the n-th place of the response.
Note The results can vary from API to API so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values or other java.util.Map as value. Requests made in JSON format hold some type information (i.e. it is known what values are strings and what values are numbers), so in general those will be more type friendly. Note that the responses will vary between XML and JSON; this is due to the responses from Salesforce API being different. So be careful if you switch between formats without changing the response handling code. Let's look at an example: final String accountId = ... final SObjectBatch batch = new SObjectBatch("38.0"); final Account updates = new Account(); updates.setName("NewName"); batch.addUpdate("Account", accountId, updates); final Account newAccount = new Account(); newAccount.setName("Account created from Composite batch API"); batch.addCreate(newAccount); batch.addGet("Account", accountId, "Name", "BillingPostalCode"); batch.addDelete("Account", accountId); final SObjectBatchResponse response = template.requestBody("salesforce:composite-batch?format=JSON", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of the four operations sent in the batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings("unchecked") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = (String) createData.get("id"); // id of the new account, this is for JSON, for XML it would be createData.get("Result").get("id") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings("unchecked") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = (String) retrieveData.get("Name"); // Name of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("Name") final String accountBillingPostalCode = (String) retrieveData.get("BillingPostalCode"); // BillingPostalCode of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("BillingPostalCode") final SObjectBatchResult deleteResult = results.get(3); // delete result final int deleteStatus = deleteResult.getStatusCode(); // probably 204 final Object deleteResultData = deleteResult.getResult(); // probably null 289.12. Using Salesforce Composite API to submit multiple chained requests The composite operation allows submitting up to 25 requests that can be chained together, for instance an identifier generated in one request can be used in a subsequent request. Individual requests and responses are linked with the provided reference . Note Composite API supports only JSON payloads. Note As with the batch API the results can vary from API to API so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values or other java.util.Map as value. Requests made in JSON format hold some type information (i.e. it is known what values are strings and what values are numbers), so in general those will be more type friendly.
Let's look at an example: SObjectComposite composite = new SObjectComposite("38.0", true); // first, an update operation on an existing Account by Id final Account updateAccount = new TestAccount(); updateAccount.setName("Salesforce"); updateAccount.setBillingStreet("Landmark @ 1 Market Street"); updateAccount.setBillingCity("San Francisco"); updateAccount.setBillingState("California"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate("Account", "001xx000003DIpcAAG", updateAccount, "UpdatedAccount"); final Contact newContact = new TestContact(); newContact.setLastName("John Doe"); newContact.setPhone("1234567890"); composite.addCreate(newContact, "NewContact"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c("001xx000003DIpcAAG"); junction.setContactId__c("@{NewContact.id}"); composite.addCreate(junction, "JunctionRecord"); final SObjectCompositeResponse response = template.requestBody("salesforce:composite?format=JSON", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> "UpdatedAccount".equals(r.getReferenceId())).findFirst().get(); final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> "JunctionRecord".equals(r.getReferenceId())).findFirst().get(); 289.13. Generating SOQL query strings org.apache.camel.component.salesforce.api.utils.QueryHelper contains helper methods to generate SOQL queries. For instance to fetch all custom fields from the Account SObject you can simply generate the SOQL SELECT by invoking: String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom); 289.14. Camel Salesforce Maven Plugin This Maven plugin generates DTOs for the Camel Salesforce component. For obvious security reasons it is recommended that the clientId, clientSecret, userName and password fields not be set in the pom.xml. The plugin should be configured for the rest of the properties, and can be executed using the following command: mvn camel-salesforce:generate -DcamelSalesforce.clientId=<clientid> -DcamelSalesforce.clientSecret=<clientsecret> \ -DcamelSalesforce.userName=<username> -DcamelSalesforce.password=<password> The generated DTOs use Jackson and XStream annotations. All Salesforce field types are supported. Date and time fields are mapped to java.time.ZonedDateTime by default, and picklist fields are mapped to generated Java Enumerations. 289.15. Options The Salesforce component supports 31 options, which are listed below. Name Description Default Type authenticationType (security) Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity.
AuthenticationType loginConfig (security) All authentication configuration in one nested bean, all properties set there can be set directly on the component as well SalesforceLoginConfig instanceUrl (security) URL of the Salesforce instance used after authantication, by default received from Salesforce on successful authentication String loginUrl (security) Required URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com https://login.salesforce.com String clientId (security) Required OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. String clientSecret (security) OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String keystore (security) KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. KeyStoreParameters refreshToken (security) Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String userName (security) Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String password (security) Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String lazyLogin (security) If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generaly set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false boolean config (common) Global endpoint configuration - use to set values that are common to all endpoints SalesforceEndpoint Config httpClientProperties (common) Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map longPollingTransport Properties (common) Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api Map sslContextParameters (security) SSL parameters to use, see SSLContextParameters class for all available options. SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters false boolean httpProxyHost (proxy) Hostname of the HTTP proxy server to use. String httpProxyPort (proxy) Port number of the HTTP proxy server to use. Integer httpProxyUsername (security) Username to use to authenticate against the HTTP proxy server. 
String httpProxyPassword (security) Password to use to authenticate against the HTTP proxy server. String isHttpProxySocks4 (proxy) If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false boolean isHttpProxySecure (security) If set to false disables the use of TLS when accessing the HTTP proxy. true boolean httpProxyIncluded Addresses (proxy) A list of addresses for which HTTP proxy server should be used. Set httpProxyExcluded Addresses (proxy) A list of addresses for which HTTP proxy server should not be used. Set httpProxyAuthUri (security) Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String httpProxyRealm (security) Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String httpProxyUseDigest Auth (security) If set to true Digest authentication will be used when authenticating to the HTTP proxy,otherwise Basic authorization method will be used false boolean packages (common) In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. String[] queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String jobType (common) Gets information only about jobs matching the specified job type. Possible values are: Classic Bulk API jobs (this includes both query jobs and ingest jobs). V2Query Bulk API 2.0 query jobs. V2Ingest Bulk API 2.0 ingest (upload and upsert) jobs. String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Salesforce endpoint is configured using URI syntax: with the following path and query parameters: 289.15.1. Path Parameters (2 parameters): Name Description Default Type operationName The operation to use. There are 59 enums and the value can be one of: getVersions, getResources, getGlobalObjects, getBasicInfo, getDescription, getSObject, createSObject, updateSObject, deleteSObject, getSObjectWithId, upsertSObject, deleteSObjectWithId, getBlobField, query, queryMore, queryAll, search, apexCall, recent, createJob, getJob, closeJob, abortJob, createBatch, getBatch, getAllBatches, getRequest, getResults, createBatchQuery, getQueryResultIds, getQueryResult, getRecentReports, getReportDescription, executeSyncReport, executeAsyncReport, getReportInstances, getReportResults, limits, approval, approvals, composite-tree, composite-batch, composite, bulk2GetAllJobs, bulk2CreateJob, bulk2GetJob, bulk2CreateBatch, bulk2CloseJob, bulk2AbortJob, bulk2DeleteJob, bulk2GetSuccessfulResults, bulk2GetFailedResults, bulk2GetUnprocessedRecords, bulk2CreateQueryJob, bulk2GetQueryJob, bulk2GetAllQueryJobs, bulk2GetQueryJobResults, bulk2AbortQueryJob, bulk2DeleteQueryJob OperationName topicName The name of the topic to use String 289.15.2. 
Query Parameters (46 parameters): Name Description Default Type apexMethod (common) APEX method name String apexQueryParams (common) Query params for APEX method Map apexUrl (common) APEX method URL String apiVersion (common) Salesforce API version, defaults to SalesforceEndpointConfig.DEFAULT_VERSION String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. long batchId (common) Bulk API Batch ID String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID String jobId (common) Bulk API Job ID String jobType (common) Gets information only about jobs matching the specified job type. Possible values are: Classic Bulk API jobs (this includes both query jobs and ingest jobs). V2Query Bulk API 2.0 query jobs. V2Ingest Bulk API 2.0 ingest (upload and upsert) jobs. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. long notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0) Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0) Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0) NotifyForOperations Enum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0) Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0) Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default false boolean reportId (common) Salesforce1 Analytics report Id String reportMetadata (common) Salesforce1 Analytics report metadata for filtering ReportMetadata resultId (common) Bulk API Result ID String serializeNulls (common) Should the NULL values of given DTO be serialized with empty (NULL) values. This affects only JSON data format. 
false boolean sObjectBlobFieldName (common) SObject blob field name String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin String sObjectFields (common) SObject fields to retrieve String sObjectId (common) SObject ID if required by API String sObjectIdName (common) SObject external ID field name String sObjectIdValue (common) SObject external ID field value String sObjectName (common) SObject name if required or supported by API String sObjectQuery (common) Salesforce SOQL query string String sObjectSearch (common) Salesforce SOSL search string String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean replayId (consumer) The replayId value to use when subscribing Long exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 289.16. Spring Boot Auto-Configuration The component supports 85 options, which are listed below. Name Description Default Type camel.component.salesforce.authentication-type Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. AuthenticationType camel.component.salesforce.client-id OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. String camel.component.salesforce.client-secret OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String camel.component.salesforce.config.apex-method APEX method name String camel.component.salesforce.config.apex-query-params Query params for APEX method Map camel.component.salesforce.config.apex-url APEX method URL String camel.component.salesforce.config.api-version Salesforce API version, defaults to SalesforceEndpointConfig.DEFAULT_VERSION String camel.component.salesforce.config.approval The approval request for Approval API. @param approval ApprovalRequest camel.component.salesforce.config.approval-action-type Represents the kind of action to take: Submit, Approve, or Reject. @param actionType ApprovalRequestUSDAction camel.component.salesforce.config.approval-comments The comment to add to the history step associated with this request. @param comments String camel.component.salesforce.config.approval-context-actor-id The ID of the submitter who's requesting the approval record. 
@param contextActorId String camel.component.salesforce.config.approval-context-id The ID of the item that is being acted upon. @param contextId String camel.component.salesforce.config.approval--approver-ids If the process requires specification of the approval, the ID of the user to be assigned the request. @param nextApproverIds List camel.component.salesforce.config.approval-process-definition-name-or-id The developer name or ID of the process definition. @param processDefinitionNameOrId String camel.component.salesforce.config.approval-skip-entry-criteria Determines whether to evaluate the entry criteria for the process (true) or not (false) if the process definition name or ID isn't null. If the process definition name or ID isn't specified, this argument is ignored, and standard evaluation is followed based on process order. By default, the entry criteria isn't skipped if it's not set by this request. @param skipEntryCriteria Boolean camel.component.salesforce.config.backoff-increment Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. Long camel.component.salesforce.config.batch-id Bulk API Batch ID String camel.component.salesforce.config.content-type Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV ContentType camel.component.salesforce.config.default-replay-id Default replayId setting if no value is found in initialReplayIdMap Long camel.component.salesforce.config.format Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON PayloadFormat camel.component.salesforce.config.http-client Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient camel.component.salesforce.config.include-details Include details in Salesforce1 Analytics report, defaults to false. Boolean camel.component.salesforce.config.initial-replay-id-map Replay IDs to start from per channel name. Map camel.component.salesforce.config.instance-id Salesforce1 Analytics report execution instance ID String camel.component.salesforce.config.job-id Bulk API Job ID String camel.component.salesforce.config.limit Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer camel.component.salesforce.config.max-backoff Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. Long camel.component.salesforce.config.not-found-behaviour Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. 
NotFoundBehaviour camel.component.salesforce.config.notify-for-fields Notify for fields, options are ALL, REFERENCED, SELECT, WHERE NotifyForFieldsEnum camel.component.salesforce.config.notify-for-operation-create Notify for create operation, defaults to false (API version = 29.0) Boolean camel.component.salesforce.config.notify-for-operation-delete Notify for delete operation, defaults to false (API version = 29.0) Boolean camel.component.salesforce.config.notify-for-operation-undelete Notify for un-delete operation, defaults to false (API version = 29.0) Boolean camel.component.salesforce.config.notify-for-operation-update Notify for update operation, defaults to false (API version = 29.0) Boolean camel.component.salesforce.config.notify-for-operations Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0) NotifyForOperations Enum camel.component.salesforce.config.object-mapper Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper camel.component.salesforce.config.raw-payload Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default false Boolean camel.component.salesforce.config.report-id Salesforce1 Analytics report Id String camel.component.salesforce.config.report-metadata Salesforce1 Analytics report metadata for filtering ReportMetadata camel.component.salesforce.config.result-id Bulk API Result ID String camel.component.salesforce.config.s-object-blob-field-name SObject blob field name String camel.component.salesforce.config.s-object-class Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin String camel.component.salesforce.config.s-object-fields SObject fields to retrieve String camel.component.salesforce.config.s-object-id SObject ID if required by API String camel.component.salesforce.config.s-object-id-name SObject external ID field name String camel.component.salesforce.config.s-object-id-value SObject external ID field value String camel.component.salesforce.config.s-object-name SObject name if required or supported by API String camel.component.salesforce.config.s-object-query Salesforce SOQL query string String camel.component.salesforce.config.s-object-search Salesforce SOSL search string String camel.component.salesforce.config.serialize-nulls Should the NULL values of given DTO be serialized with empty (NULL) values. This affects only JSON data format. false Boolean camel.component.salesforce.config.update-topic Whether to update an existing Push Topic when using the Streaming API, defaults to false false Boolean camel.component.salesforce.enabled Enable salesforce component true Boolean camel.component.salesforce.http-client-properties Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map camel.component.salesforce.http-proxy-auth-uri Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String camel.component.salesforce.http-proxy-excluded-addresses A list of addresses for which HTTP proxy server should not be used. Set camel.component.salesforce.http-proxy-host Hostname of the HTTP proxy server to use. String camel.component.salesforce.http-proxy-included-addresses A list of addresses for which HTTP proxy server should be used. 
Set camel.component.salesforce.http-proxy-password Password to use to authenticate against the HTTP proxy server. String camel.component.salesforce.http-proxy-port Port number of the HTTP proxy server to use. Integer camel.component.salesforce.http-proxy-realm Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String camel.component.salesforce.http-proxy-use-digest-auth If set to true Digest authentication will be used when authenticating to the HTTP proxy,otherwise Basic authorization method will be used false Boolean camel.component.salesforce.http-proxy-username Username to use to authenticate against the HTTP proxy server. String camel.component.salesforce.instance-url URL of the Salesforce instance used after authantication, by default received from Salesforce on successful authentication String camel.component.salesforce.is-http-proxy-secure If set to false disables the use of TLS when accessing the HTTP proxy. true Boolean camel.component.salesforce.is-http-proxy-socks4 If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false Boolean camel.component.salesforce.keystore KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. The option is a org.apache.camel.util.jsse.KeyStoreParameters type. String camel.component.salesforce.lazy-login If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generaly set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false Boolean camel.component.salesforce.login-config.client-id Salesforce connected application Consumer Key String camel.component.salesforce.login-config.client-secret Salesforce connected application Consumer Secret String camel.component.salesforce.login-config.instance-url String camel.component.salesforce.login-config.keystore Keystore parameters for keystore containing certificate and private key needed for OAuth 2.0 JWT Bearer Token Flow. KeyStoreParameters camel.component.salesforce.login-config.lazy-login Flag to enable/disable lazy OAuth, default is false. When enabled, OAuth token retrieval or generation is not done until the first API call Boolean camel.component.salesforce.login-config.login-url Salesforce login URL, defaults to https://login.salesforce.com String camel.component.salesforce.login-config.password Salesforce account password String camel.component.salesforce.login-config.refresh-token Salesforce connected application Consumer token String camel.component.salesforce.login-config.type AuthenticationType camel.component.salesforce.login-config.user-name Salesforce account user name String camel.component.salesforce.login-url URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com https://login.salesforce.com String camel.component.salesforce.long-polling-transport-properties Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api Map camel.component.salesforce.packages In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. 
Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. String[] camel.component.salesforce.password Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String camel.component.salesforce.refresh-token Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String camel.component.salesforce.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.salesforce.ssl-context-parameters SSL parameters to use, see SSLContextParameters class for all available options. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.salesforce.use-global-ssl-context-parameters Enable usage of global SSL context parameters false Boolean camel.component.salesforce.user-name Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-salesforce</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"salesforce:topic?options",
"salesforce:operationName?options",
"// in your Camel route set the header before Salesforce endpoint // .setHeader(\"Sforce-Limit-Info\", constant(\"api-usage\")) .to(\"salesforce:getGlobalObjects\") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader(\"Sforce-Limit-Info\", String.class); } }",
"...to(\"salesforce:upsertSObject?sObjectIdName=Name\")",
"...to(\"salesforce:createBatchJob\")..",
"from(\"salesforce:CamelTestTopic?notifyForFields=ALL¬ifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c\")",
"from(\"salesforce:CamelTestTopic&sObjectName=Merchandise__c\")",
"class Order_Event__e extends AbstractDTOBase { @JsonProperty(\"OrderNumber\") private String orderNumber; // ... other properties and getters/setters } from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + String.valueOf(in.getHeader(Exchange.TIMER_COUNTER)); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to(\"salesforce:createSObject\");",
"from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + String.valueOf(in.getHeader(Exchange.TIMER_COUNTER)); in.setBody(\"{\\\"OrderNumber\\\":\\\"\" + orderNumber + \"\\\"}\"); }) .to(\"salesforce:createSObject?sObjectName=Order_Event__e\");",
"PlatformEvent event = consumer.receiveBody(\"salesforce:event/Order_Event__e\", PlatformEvent.class);",
"public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle(\"test document\"); cv.setPathOnClient(\"test_doc.html\"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkSpace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here . } }",
"from(\"file:///home/camel/library\") .to(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to(\"salesforce:createSObject\");",
"from(\"direct:querySalesforce\") .to(\"salesforce:limits\") .choice() .when(spel(\"#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}\")) .to(\"salesforce:query?...\") .otherwise() .setBody(constant(\"Used up Salesforce API limits, leaving 10% for critical routes\")) .endChoice()",
"from(\"direct:example1\")// .setHeader(\"approval.ContextId\", simple(\"USD{body['contextId']}\")) .setHeader(\"approval.NextApproverIds\", simple(\"USD{body['nextApproverIds']}\")) .to(\"salesforce:approval?\"// + \"approval.actionType=Submit\"// + \"&approval.comments=this is a test\"// + \"&approval.processDefinitionNameOrId=Test_Account_Process\"// + \"&approval.skipEntryCriteria=true\");",
"final Map<String, String> body = new HashMap<>(); body.put(\"contextId\", accountIds.iterator().next()); body.put(\"nextApproverIds\", userId); final ApprovalResult result = template.requestBody(\"direct:example1\", body, ApprovalResult.class);",
"from(\"direct:fetchRecentItems\") to(\"salesforce:recent\") .split().body() .log(\"USD{body.name} at USD{body.attributes.url}\");",
"from(\"direct:example1\")// .setHeader(\"approval.ContextId\", simple(\"USD{body['contextId']}\")) .setHeader(\"approval.NextApproverIds\", simple(\"USD{body['nextApproverIds']}\")) .to(\"salesforce:approval?\"// + \"approvalActionType=Submit\"// + \"&approvalComments=this is a test\"// + \"&approvalProcessDefinitionNameOrId=Test_Account_Process\"// + \"&approvalSkipEntryCriteria=true\");",
"final Map<String, String> body = new HashMap<>(); body.put(\"contextId\", accountIds.iterator().next()); body.put(\"nextApproverIds\", userId); final ApprovalResult result = template.requestBody(\"direct:example1\", body, ApprovalResult.class);",
"Account account = Contact president = Contact marketing = Account anotherAccount = Contact sales = Asset someAsset = // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody(\"salesforce:composite-tree\", tree, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId();",
"final String acountId = final SObjectBatch batch = new SObjectBatch(\"38.0\"); final Account updates = new Account(); updates.setName(\"NewName\"); batch.addUpdate(\"Account\", accountId, updates); final Account newAccount = new Account(); newAccount.setName(\"Account created from Composite batch API\"); batch.addCreate(newAccount); batch.addGet(\"Account\", accountId, \"Name\", \"BillingPostalCode\"); batch.addDelete(\"Account\", accountId); final SObjectBatchResponse response = template.requestBody(\"salesforce:composite-batch?format=JSON\", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of three operations sent in batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings(\"unchecked\") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = createData.get(\"id\"); // id of the new account, this is for JSON, for XML it would be createData.get(\"Result\").get(\"id\") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings(\"unchecked\") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = retrieveData.get(\"Name\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"Name\") final String accountBillingPostalCode = retrieveData.get(\"BillingPostalCode\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"BillingPostalCode\") final SObjectBatchResult deleteResult = results.get(3); // delete result final int updateStatus = deleteResult.getStatusCode(); // probably 204 final Object updateResultData = deleteResult.getResult(); // probably null",
"SObjectComposite composite = new SObjectComposite(\"38.0\", true); // first insert operation via an external id final Account updateAccount = new TestAccount(); updateAccount.setName(\"Salesforce\"); updateAccount.setBillingStreet(\"Landmark @ 1 Market Street\"); updateAccount.setBillingCity(\"San Francisco\"); updateAccount.setBillingState(\"California\"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate(\"Account\", \"001xx000003DIpcAAG\", updateAccount, \"UpdatedAccount\"); final Contact newContact = new TestContact(); newContact.setLastName(\"John Doe\"); newContact.setPhone(\"1234567890\"); composite.addCreate(newContact, \"NewContact\"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c(\"001xx000003DIpcAAG\"); junction.setContactId__c(\"@{NewContact.id}\"); composite.addCreate(junction, \"JunctionRecord\"); final SObjectCompositeResponse response = template.requestBody(\"salesforce:composite?format=JSON\", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> \"UpdatedAccount\".equals(r.getReferenceId())).findFirst().get() final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> \"JunctionRecord\".equals(r.getReferenceId())).findFirst().get()",
"String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);",
"mvn camel-salesforce:generate -DcamelSalesforce.clientId=<clientid> -DcamelSalesforce.clientSecret=<clientsecret> -DcamelSalesforce.userName=<username> -DcamelSalesforce.password=<password>",
"salesforce:operationName:topicName"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/salesforce-component |
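As a quick illustration of how the Spring Boot options above fit together with the endpoint examples, the following is a minimal sketch of a route that calls the salesforce:limits operation shown earlier. It assumes the connected-app credentials (client id, client secret, user name, password or refresh token) are supplied through the camel.component.salesforce.* properties listed above; the class name, timer name and period are placeholder choices, not part of the component reference.

import org.apache.camel.builder.RouteBuilder;

public class SalesforceLimitsRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Authentication is handled entirely by the component-level configuration,
        // so the route only has to name the Salesforce operation it wants to call.
        from("timer:salesforceLimits?period=60000")   // poll once a minute (arbitrary period)
            .to("salesforce:limits")                  // same operation as in the limits example above
            .log("Remaining daily API requests: ${body.dailyApiRequests.remaining}");
    }
}

Any other operation from this reference (for example salesforce:getGlobalObjects) can be substituted in the same way, since the route itself carries no credentials.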
Chapter 5. PersistentVolume [v1] | Chapter 5. PersistentVolume [v1] Description PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeSpec is the specification of a persistent volume. status object PersistentVolumeStatus is the current status of a persistent volume. 5.1.1. .spec Description PersistentVolumeSpec is the specification of a persistent volume. Type object Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. claimRef object ObjectReference contains enough information to let you inspect or modify the referred object. csi object Represents storage that is managed by an external CSI volume driver (Beta feature) fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. 
The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. local object Local represents directly-attached storage with node affinity (Beta feature) mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. nodeAffinity object VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos object Represents a StorageOS persistent volume resource. volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. vsphereVolume object Represents a vSphere volume resource. 5.1.2. .spec.awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. 
AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 5.1.3. .spec.azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 5.1.4. .spec.azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key secretNamespace string secretNamespace is the namespace of the secret that contains Azure Storage Account Name and Key default is the same as the Pod shareName string shareName is the azure Share Name 5.1.5. .spec.cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 5.1.6. .spec.cephfs.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.7. .spec.cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 5.1.8. .spec.cinder.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.9. .spec.claimRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.10. .spec.csi Description Represents storage that is managed by an external CSI volume driver (Beta feature) Type object Required driver volumeHandle Property Type Description controllerExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace controllerPublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace driver string driver is the name of the driver to use for this volume. Required. fsType string fsType to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". nodeExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodePublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodeStageSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace readOnly boolean readOnly value to pass to ControllerPublishVolumeRequest. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes of the volume to publish. volumeHandle string volumeHandle is the unique volume name returned by the CSI volume plugin's CreateVolume to refer to the volume on all subsequent calls. Required. 5.1.11. .spec.csi.controllerExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.12. .spec.csi.controllerPublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.13. .spec.csi.nodeExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.14. .spec.csi.nodePublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.15. .spec.csi.nodeStageSecretRef Description SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.16. .spec.fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 5.1.17. .spec.flexVolume Description FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace 5.1.18. .spec.flexVolume.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.19. .spec.flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 5.1.20. .spec.gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 5.1.21. .spec.glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod endpointsNamespace string endpointsNamespace is the namespace that contains Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 5.1.22. .spec.hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 5.1.23. .spec.iscsi Description ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is Target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun is iSCSI Target Lun number. 
portals array (string) portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 5.1.24. .spec.iscsi.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.25. .spec.local Description Local represents directly-attached storage with node affinity (Beta feature) Type object Required path Property Type Description fsType string fsType is the filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default value is to auto-select a filesystem if unspecified. path string path of the full path to the volume on the node. It can be either a directory or block device (disk, partition, ... ). 5.1.26. .spec.nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 5.1.27. .spec.nodeAffinity Description VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. Type object Property Type Description required object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 5.1.28. .spec.nodeAffinity.required Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 5.1.29. .spec.nodeAffinity.required.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 5.1.30. .spec.nodeAffinity.required.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 
Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 5.1.31. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 5.1.32. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.33. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 5.1.34. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.35. .spec.photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 5.1.36. .spec.portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 5.1.37.
.spec.quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to. Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 5.1.38. .spec.rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 5.1.39. .spec.rbd.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.40. .spec.scaleIO Description ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs" gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference.
It has enough information to retrieve secret in any namespace sslEnabled boolean sslEnabled is the flag to enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 5.1.41. .spec.scaleIO.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.42. .spec.storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object ObjectReference contains enough information to let you inspect or modify the referred object. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 5.1.43. .spec.storageos.secretRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.44. .spec.vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 5.1.45. .status Description PersistentVolumeStatus is the current status of a persistent volume. Type object Property Type Description message string message is a human-readable message indicating details about why the volume is in this state. phase string phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase Possible enum values: - "Available" used for PersistentVolumes that are not yet bound Available volumes are held by the binder and matched to PersistentVolumeClaims - "Bound" used for PersistentVolumes that are bound - "Failed" used for PersistentVolumes that failed to be correctly recycled or deleted after being released from a claim - "Pending" used for PersistentVolumes that are not available - "Released" used for PersistentVolumes where the bound PersistentVolumeClaim was deleted released volumes must be recycled before becoming available again this phase is used by the persistent volume claim binder to signal to another process to reclaim the resource reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. 5.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumes DELETE : delete collection of PersistentVolume GET : list or watch objects of kind PersistentVolume POST : create a PersistentVolume /api/v1/watch/persistentvolumes GET : watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/persistentvolumes/{name} DELETE : delete a PersistentVolume GET : read the specified PersistentVolume PATCH : partially update the specified PersistentVolume PUT : replace the specified PersistentVolume /api/v1/watch/persistentvolumes/{name} GET : watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/persistentvolumes/{name}/status GET : read status of the specified PersistentVolume PATCH : partially update status of the specified PersistentVolume PUT : replace status of the specified PersistentVolume 5.2.1. /api/v1/persistentvolumes Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PersistentVolume Table 5.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.3. Body parameters Parameter Type Description body DeleteOptions schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolume Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolume Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body PersistentVolume schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/persistentvolumes Table 5.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/persistentvolumes/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the PersistentVolume Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PersistentVolume Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolume Table 5.17. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolume Table 5.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.19. Body parameters Parameter Type Description body Patch schema Table 5.20. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolume Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. Body parameters Parameter Type Description body PersistentVolume schema Table 5.23. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/persistentvolumes/{name} Table 5.24. Global path parameters Parameter Type Description name string name of the PersistentVolume Table 5.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/persistentvolumes/{name}/status Table 5.27. Global path parameters Parameter Type Description name string name of the PersistentVolume Table 5.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PersistentVolume Table 5.29. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolume Table 5.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.31. Body parameters Parameter Type Description body Patch schema Table 5.32. HTTP responses HTTP code Reponse body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolume Table 5.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.34. Body parameters Parameter Type Description body PersistentVolume schema Table 5.35. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage_apis/persistentvolume-v1 |
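The resource paths above can also be exercised directly with any HTTP client. The following is a minimal sketch rather than part of the generated reference: the API server URL, the use of oc whoami -t to obtain a bearer token, and the PersistentVolume name pv0001 are illustrative assumptions to be replaced with values from your own cluster.

# Hypothetical server address and token -- substitute your own values.
APISERVER=https://api.example.com:6443
TOKEN=$(oc whoami -t)

# Paginated list: pass the metadata.continue token from the first response
# back with the same query to fetch the next chunk of the list.
# (-k skips TLS verification; acceptable only for a quick test.)
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/persistentvolumes?limit=2"

# Read a single PersistentVolume by name (pv0001 is a placeholder).
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/persistentvolumes/pv0001"

# Stream changes instead of polling, starting from a previously observed resourceVersion.
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/persistentvolumes?watch=true&resourceVersion=<version>"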
Chapter 18. Upgrading to OpenShift Data Foundation | Chapter 18. Upgrading to OpenShift Data Foundation 18.1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.12 and 4.13, or between z-stream updates like 4.13.0 and 4.13.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.12 to 4.13 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. After the upgrade, you must run step 1 of the workaround for BZ#2215462 as documented in the DR upgrade Known issues section of Release notes . Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of OpenShift Data Foundation. For more information, see Scaling storage guide . 18.2. Updating Red Hat OpenShift Data Foundation 4.12 to 4.13 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. 
For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about Red Hat Ceph Storage releases. Important Upgrading to 4.13 directly from any version older than 4.12 is unsupported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.13.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.13 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. 
If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 18.3. Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about Red Hat Ceph Storage releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.13.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows require approval , click on requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. 
Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. If the verification steps fail, contact Red Hat Support . 18.4. Changing the update approval strategy To ensure that the storage system is updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/upgrading-your-cluster_rhodf |
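The same health checks and the channel change from the procedures above can be approximated from the command line. This is a hedged sketch rather than an official procedure: the subscription name odf-operator is an assumption that can differ between deployments, so confirm it with oc get subscription -n openshift-storage before patching.

# Confirm that all pods in the storage namespace are running before and after the update.
oc get pods -n openshift-storage

# Check the installed operator version (CSV) and the subscription that controls updates.
oc get csv -n openshift-storage
oc get subscription -n openshift-storage

# Switch the update channel (assumes the subscription is named odf-operator).
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"channel":"stable-4.13"}}'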
function::tz_gmtoff | function::tz_gmtoff Name function::tz_gmtoff - Return local time zone offset Synopsis Arguments None Description Returns the local time zone offset (seconds west of UTC), as passed by staprun at script startup only. | [
"tz_gmtoff()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tz-gmtoff |
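A short usage sketch for the function above; the probe point and the output wording are illustrative, not part of the tapset definition.

probe begin {
  # Print the startup-time offset and exit immediately.
  printf("local time zone offset: %d seconds west of UTC\n", tz_gmtoff())
  exit()
}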
20.16.9.12. Specifying boot order To specify the boot order, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <boot order='1'/> </interface> </devices> ... Figure 20.49. Specifying boot order For hypervisors which support it, you can set a specific NIC to be used for the network boot. The value of the order attribute determines the order in which devices are tried during the boot sequence. Note that per-device boot elements cannot be used together with general boot elements in the BIOS boot loader section. | [
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <boot order='1'/> </interface> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-network-interfaces-boot-order |
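For example, per-device order values can be combined so that the guest tries PXE boot from the NIC first and then falls back to a disk. The disk definition below is a hypothetical placeholder for a disk that already exists in the guest's XML; only the boot sub-elements are the point of the sketch.

<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <boot order='1'/>
  </interface>
  <disk type='file' device='disk'>
    <source file='/var/lib/libvirt/images/guest.img'/>
    <target dev='vda' bus='virtio'/>
    <boot order='2'/>
  </disk>
</devices>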
Chapter 15. Idling applications Cluster administrators can idle applications to reduce resource consumption. This is useful when the cluster is deployed on a public cloud where cost is related to resource consumption. If any scalable resources are not in use, OpenShift Container Platform discovers and idles them by scaling their replicas to 0. The next time network traffic is directed to the resources, they are unidled by scaling up the replicas, and normal operation continues. Applications are made of services, as well as other scalable resources, such as deployment configs. The action of idling an application involves idling all associated resources. 15.1. Idling applications Idling an application involves finding the scalable resources (deployment configurations, replication controllers, and others) associated with a service. Idling an application finds the service and marks it as idled, scaling down the resources to zero replicas. You can use the oc idle command to idle a single service, or use the --resource-names-file option to idle multiple services. 15.1.1. Idling a single service Procedure To idle a single service, run: $ oc idle <service> 15.1.2. Idling multiple services Idling multiple services is helpful when an application spans a set of services within a project, or when you combine it with a script to idle multiple applications in bulk within the same project. Procedure Create a file containing a list of the services, each on its own line. Idle the services using the --resource-names-file option: $ oc idle --resource-names-file <filename> Note The idle command is limited to a single project. To idle applications across a cluster, run the idle command for each project individually. 15.2. Unidling applications Application services become active again when they receive network traffic and are scaled back up to their previous state. This includes both traffic to the services and traffic passing through routes. Applications can also be manually unidled by scaling up the resources. Procedure To scale up a DeploymentConfig, run: $ oc scale --replicas=1 dc <dc_name> Note Automatic unidling by a router is currently only supported by the default HAProxy router. Note Services do not support automatic unidling if you configure Kuryr-Kubernetes as an SDN. | [
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/idling-applications |
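A minimal sketch of the multi-service flow from section 15.1.2; the file name services.txt and the service names are placeholders, and all of the services are assumed to live in the current project.

# services.txt lists one service per line.
cat > services.txt <<'EOF'
frontend
backend
cache
EOF

# Idle every service named in the file.
oc idle --resource-names-file services.txt

# Later, unidle one of them manually by scaling its deployment config back up
# (this assumes the deployment config shares the service name).
oc scale --replicas=1 dc frontend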
Chapter 7. Designing the Replication Process | Chapter 7. Designing the Replication Process Replicating the directory contents increases the availability and performance of the directory service. Chapter 4, Designing the Directory Tree and Chapter 6, Designing the Directory Topology cover the design of the directory tree and the directory topology. This chapter addresses the physical and geographical location of the data and, specifically, how to use replication to ensure the data is available when and where it is needed. This chapter discusses uses for replication and offers advice on designing a replication strategy for the directory environment. 7.1. Introduction to Replication Replication is the mechanism that automatically copies directory data from one Red Hat Directory Server to another. Using replication, any directory tree or subtree (stored in its own database) can be copied between servers. The Directory Server that holds the main copy of the information automatically copies any updates to all replicas. Replication provides a high-availability directory service and can distribute the data geographically. In practical terms, replication provides the following benefits: Fault tolerance and failover - By replicating directory trees to multiple servers, the directory service is available even if hardware, software, or network problems prevent the directory client applications from accessing a particular Directory Server. Clients are referred to another Directory Server for read and write operations. Note Write failover is only possible with multi-supplier replication. Load balancing - Replicating the directory tree across servers reduces the access load on any given machine, thereby improving server response time. Higher performance and reduced response times - Replicating directory entries to a location close to users significantly improves directory response times. Local data management - Replication allows information to be owned and managed locally while sharing it with other Directory Servers across the enterprise. 7.1.1. Replication Concepts Always start planning replication by making the following fundamental decisions: What information to replicate. Which servers hold the main copy, or read-write replica , of that information. Which servers hold the read-only copy, or read-only replica , of that information. What should happen when a read-only replica receives an update request; that is, to which server it should refer the request. These decisions cannot be made effectively without an understanding of how the Directory Server handles these concepts. For example, decide what information to replicate, be aware of the smallest replication unit that the Directory Server can handle. The replication concepts used by the Directory Server provide a framework for thinking about the global decisions that need to be made. 7.1.1.1. Unit of Replication The smallest unit of replication is a database. An entire database can be replicated but not a subtree within a database. Therefore, when defining the directory tree, always consider replication. For more information on how to set up the directory tree, see Chapter 4, Designing the Directory Tree . The replication mechanism also requires that one database correspond to one suffix. A suffix (or namespace) that is distributed over two or more databases cannot be replicated. 7.1.1.2. Read-Write and Read-Only Replicas A database that participates in replication is defined as a replica . 
Directory Server supports two types of replicas: read-write and read-only. The read-write replicas contain main copies of directory information and can be updated. Read-only replicas refer all update operations to read-write replicas. 7.1.1.3. Suppliers and Consumers A server that stores a replica that is copied to a different server is called a supplier . A server that stores a replica that is copied from a different server is called a consumer . Generally speaking, the replica on the supplier server is a read-write replica; the replica on the consumer server is a read-only replica. However, the following exceptions apply: In the case of cascading replication , the hub supplier holds a read-only replica that it supplies to consumers. For more information, see Section 7.2.3, "Cascading Replication" . In the case of multi-supplier replication , the suppliers function as both suppliers and consumers for the same read-write replica. For more information, see Section 7.2.2, "Multi-Supplier Replication" . Note In the current version of Red Hat Directory Server, replication is always initiated by the supplier server, never by the consumer. This is unlike earlier versions of Directory Server, which allowed consumer-initiated replication (where consumer servers could retrieve data from a supplier server). Suppliers For any particular replica, the supplier server must: Respond to read requests and update requests from directory clients. Maintain state information and a changelog for the replica. Initiate replication to consumer servers. The supplier server is always responsible for recording the changes made to the read-write replicas that it manages, so the supplier server makes sure that any changes are replicated to consumer servers. Consumers A consumer server must: Respond to read requests. Refer update requests to a supplier server for the replica. Whenever a consumer server receives a request to add, delete, or change an entry, the request is referred to a supplier for the replica. The supplier server performs the request, then replicates the change. Hub Suppliers In the special case of cascading replication, the hub supplier must: Respond to read requests. Refer update requests to a supplier server for the replica. Initiate replication to consumer servers. For more information on cascading replication, see Section 7.2.3, "Cascading Replication" . 7.1.1.4. Replication and Changelogs Every supplier server maintains a changelog . A changelog is a record of the modifications that have occurred on a replica. The supplier server then replays these modifications on the replicas stored on consumer servers, or on other suppliers in the case of multi-supplier replication. When an entry is modified, a change record describing the LDAP operation that was performed is recorded in the changelog. The changelog size is maintained with two attributes, nsslapd-changelogmaxage or nsslapd-changelogmaxentries . These attributes trim the old changelogs to keep the changelog size reasonable. 7.1.1.5. Replication Agreement Directory Servers use replication agreements to define replication. A replication agreement describes replication between a single supplier and a single consumer. The agreement is configured on the supplier server. It identifies: The database to replicate. The consumer server to which the data is pushed. The times that replication can occur. The DN that the supplier server must use to bind (called the supplier bind DN ). 
How the connection is secured (TLS, Start TLS, client authentication, SASL, or simple authentication). Any attributes that will not be replicated (see Section 7.3.2, "Replicate Selected Attributes with Fractional Replication" ). 7.1.2. Data Consistency Consistency refers to how closely the contents of replicated databases match each other at a given point in time. Part of the configuration for replication between servers is to schedule updates. The supplier server always determines when consumer servers need to be updated and initiates replication. Directory Server offers the option of keeping replicas always synchronized or of scheduling updates for a particular time of day or day of the week. The advantage of keeping replicas constantly synchronized is that it provides better data consistency. The cost is the network traffic resulting from the frequent update operations. This solution is the best option when: There is a reliable, high-speed connection between servers. The client requests serviced by the directory service are mainly search, read, and compare operations, with relatively few update operations. If a lower level of data consistency is acceptable, choose the frequency of updates that best suits the use patterns of the network or lowers the effect on network traffic. There are several situations where having scheduled updates instead of constant updates is the best solution: There are unreliable or intermittently available network connections. The client requests serviced by the directory service are mainly update operations. Communication costs have to be lowered. In the case of multi-supplier replication, the replicas on each supplier are said to be loosely consistent , because at any given time, there can be differences in the data stored on each supplier. This is true, even if the replicas are constantly synchronized, for two reasons: There is a latency in the propagation of update operations between suppliers. The supplier that serviced the update operation does not wait for the second supplier to validate it before returning an "operation successful" message to the client. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Replication_Process |
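Replica definitions and replication agreements are stored as entries under cn=config, so the configuration that a replication design produces can be inspected with an ordinary LDAP search. The host name and bind DN below are placeholders; this is a read-only sketch, not a required step of the design process.

# List replica definitions and replication agreements on a supplier.
ldapsearch -x -H ldap://supplier.example.com -D "cn=Directory Manager" -W \
  -b cn=config "(|(objectClass=nsDS5Replica)(objectClass=nsds5ReplicationAgreement))" \
  nsDS5ReplicaRoot nsDS5ReplicaType nsDS5ReplicaHost nsDS5ReplicaPort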
Chapter 304. Simple Language | Chapter 304. Simple Language Available as of Camel version 1.1 The Simple Expression Language was a really simple language when it was created, but has since grown more powerful. It is primarily intended for being a really small and simple language for evaluating Expressions and Predicates without requiring any new dependencies or knowledge of XPath ; so it is ideal for testing in camel-core. The idea was to cover 95% of the common use cases when you need a little bit of expression based script in your Camel routes. However for much more complex use cases you are generally recommended to choose a more expressive and powerful language such as: SpEL Mvel Groovy JavaScript OGNL one of the supported Scripting Languages The simple language uses USD{body } placeholders for complex expressions where the expression contains constant literals. The USD\{ } placeholders can be omitted if the expression is only the token itself. Tip Alternative syntax From Camel 2.5 onwards you can also use the alternative syntax which uses USDsimple{ } as placeholders. This can be used in situations to avoid clashes when using for example Spring property placeholder together with Camel. 304.1. Simple Language Changes in Camel 2.9 onwards The Simple language have been improved from Camel 2.9 onwards to use a better syntax parser, which can do index precise error messages, so you know exactly what is wrong and where the problem is. For example if you have made a typo in one of the operators, then previously the parser would not be able to detect this, and cause the evaluation to be true. There are a few changes in the syntax which are no longer backwards compatible. When using Simple language as a Predicate then the literal text must be enclosed in either single or double quotes. For example: "USD{body} == 'Camel'" . Notice how we have single quotes around the literal. The old style of using "body" and "header.foo" to refer to the message body and header is @deprecated, and it is encouraged to always use USD\{ } tokens for the built-in functions. The range operator now requires the range to be in single quote as well as shown: "USD{header.zip} between '30000..39999'" . To get the body of the in message: "body" , or "in.body" or "USD{body}" . A complex expression must use USD\{ } placeholders, such as: "Hello USD{in.header.name} how are you?" . You can have multiple functions in the same expression: "Hello USD{in.header.name} this is USD{in.header.me} speaking" . However you can not nest functions in Camel 2.8.x or older (i.e. having another USD\{ } placeholder in an existing, is not allowed). From Camel 2.9 onwards you can nest functions. 304.2. Simple Language options The Simple language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output) trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks 304.3. Variables Variable Type Description camelId String Camel 2.10: the CamelContext name camelContext. OGNL Object Camel 2.11: the CamelContext invoked using a Camel OGNL expression. exchange Exchange Camel 2.16: the Exchange exchange. OGNL Object Camel 2.16: the Exchange invoked using a Camel OGNL expression. exchangeId String Camel 2.3: the exchange id id String the input message id body Object the input body in.body Object the input body body. OGNL Object Camel 2.3: the input body invoked using a Camel OGNL expression. in.body. 
OGNL Object Camel 2.3: the input body invoked using a Camel OGNL expression. bodyAs( type ) Type Camel 2.3: Converts the body to the given type determined by its classname. The converted body can be null. bodyAs( type ). OGNL Object Camel 2.18: Converts the body to the given type determined by its classname and then invoke methods using a Camel OGNL expression. The converted body can be null. mandatoryBodyAs( type ) Type Camel 2.5: Converts the body to the given type determined by its classname, and expects the body to be not null. mandatoryBodyAs( type ). OGNL Object Camel 2.18: Converts the body to the given type determined by its classname and then invoke methods using a Camel OGNL expression. out.body Object the output body header.foo Object refer to the input foo header header[foo] Object Camel 2.9.2: refer to the input foo header headers.foo Object refer to the input foo header headers[foo] Object Camel 2.9.2: refer to the input foo header in.header.foo Object refer to the input foo header in.header[foo] Object Camel 2.9.2: refer to the input foo header in.headers.foo Object refer to the input foo header in.headers[foo] Object Camel 2.9.2: refer to the input foo header header.foo[bar] Object Camel 2.3: regard input foo header as a map and perform lookup on the map with bar as key in.header.foo[bar] Object Camel 2.3: regard input foo header as a map and perform lookup on the map with bar as key in.headers.foo[bar] Object Camel 2.3: regard input foo header as a map and perform lookup on the map with bar as key header.foo. OGNL Object Camel 2.3: refer to the input foo header and invoke its value using a Camel OGNL expression. in.header.foo. OGNL Object Camel 2.3: refer to the input foo header and invoke its value using a Camel OGNL expression. in.headers.foo. OGNL Object Camel 2.3: refer to the input foo header and invoke its value using a Camel OGNL expression. out.header.foo Object refer to the out header foo out.header[foo] Object Camel 2.9.2: refer to the out header foo out.headers.foo Object refer to the out header foo out.headers[foo] Object Camel 2.9.2: refer to the out header foo headerAs( key , type ) Type Camel 2.5: Converts the header to the given type determined by its classname headers Map Camel 2.9: refer to the input headers in.headers Map Camel 2.9: refer to the input headers property.foo Object Deprecated: refer to the foo property on the exchange exchangeProperty.foo Object Camel 2.15: refer to the foo property on the exchange property[foo] Object Deprecated: refer to the foo property on the exchange exchangeProperty[foo] Object Camel 2.15: refer to the foo property on the exchange property.foo. OGNL Object Deprecated: refer to the foo property on the exchange and invoke its value using a Camel OGNL expression. exchangeProperty.foo. OGNL Object Camel 2.15: refer to the foo property on the exchange and invoke its value using a Camel OGNL expression. sys.foo String refer to the system property sysenv.foo String Camel 2.3: refer to the system environment exception Object Camel 2.4: Refer to the exception object on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. exception. OGNL Object Camel 2.4: Refer to the exchange exception invoked using a Camel OGNL expression object exception.message String Refer to the exception.message on the exchange, is null if no exception set on exchange. 
Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. exception.stacktrace String Camel 2.6: Refer to the exception.stacktrace on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. date:_command_ Date Evaluates to a Date object. Supported commands are: now for current timestamp, in.header.xxx or header.xxx to use the Date object in the IN header with the key xxx. out.header.xxx to use the Date object in the OUT header with the key xxx. property.xxx to use the Date object in the exchange property with the key xxx. file for the last modified timestamp of the file (available with a File consumer). Command accepts offsets such as: now-24h or in.header.xxx+1h or even now+1h30m-100 . date:_command:pattern_ String Date formatting using java.text.SimpleDateFormat patterns. date-with-timezone:_command:timezone:pattern_ String Date formatting using java.text.SimpleDateFormat timezones and patterns. bean:_bean expression_ Object Invoking a bean expression using the Bean language. To specify a method name you must use a dot as the separator. We also support the ?method=methodname syntax that is used by the Bean component. properties:_locations:key_ String Deprecated (use properties-location instead) Camel 2.3: Lookup a property with the given key. The locations option is optional. See more at Using PropertyPlaceholder. properties-location:_locations:key_ String Camel 2.14.1: Lookup a property with the given key. The locations option is optional. See more at Using PropertyPlaceholder. properties:key:default String Camel 2.14.1: Lookup a property with the given key. If the key does not exist or has no value, then an optional default value can be specified. routeId String Camel 2.11: Returns the id of the current route the Exchange is being routed on. threadName String Camel 2.3: Returns the name of the current thread. Can be used for logging purposes. ref:xxx Object Camel 2.6: To look up a bean from the Registry with the given id. type:name.field Object Camel 2.11: To refer to a type or field by its FQN name. To refer to a field you can append .FIELD_NAME. For example you can refer to the constant field from Exchange as: org.apache.camel.Exchange.FILE_NAME null null Camel 2.12.3: represents a null random_(value)_ Integer Camel 2.16.0: returns a random Integer between 0 (included) and value (excluded) random_(min,max)_ Integer Camel 2.16.0: returns a random Integer between min (included) and max (excluded) collate(group) List Camel 2.17: The collate function iterates the message body and groups the data into sub lists of specified size. This can be used with the Splitter EIP to split a message body and group/batch the split sub messages into groups of N sub lists. This method works similarly to the collate method in Groovy. skip(number) Iterator Camel 2.19: The skip function iterates the message body and skips the first number of items. This can be used with the Splitter EIP to split a message body and skip the first N number of items. messageHistory String Camel 2.17: The message history of the current exchange showing how it has been routed. This is similar to the route stack-trace message history the error handler logs in case of an unhandled exception. messageHistory(false) String Camel 2.17: As messageHistory but without the exchange details (only includes the route stack-trace).
This can be used if you do not want to log sensitive data from the message itself. 304.4. OGNL expression support Available as of Camel 2.3 Info: Camel's OGNL support is for invoking methods only. You cannot access fields. From Camel 2.11.1 onwards we added special support for accessing the length field of Java arrays. The Simple and Bean languages now support a Camel OGNL notation for invoking beans in a chain-like fashion. Suppose the Message IN body contains a POJO which has a getAddress() method. Then you can use Camel OGNL notation to access the address object: simple("USD{body.address}") simple("USD{body.address.street}") simple("USD{body.address.zip}") Camel understands the shorthand names for getters, but you can invoke any method or use the real name such as: simple("USD{body.address}") simple("USD{body.getAddress.getStreet}") simple("USD{body.address.getZip}") simple("USD{body.doSomething}") You can also use the null safe operator ( ?. ) to avoid a NPE if, for example, the body does NOT have an address: simple("USD{body?.address?.street}") It is also possible to index into Map or List types, so you can do: simple("USD{body[foo].name}") This assumes the body is Map based, looks up the value with foo as the key, and invokes the getName method on that value. If the key contains spaces, then you must enclose the key in quotes, for example 'foo bar': simple("USD{body['foo bar'].name}") You can access the Map or List objects directly using their key name (with or without dots): simple("USD{body[foo]}") simple("USD{body[this.is.foo]}") If there is no value with the key foo, you can use the null safe operator to avoid the NPE, as shown: simple("USD{body[foo]?.name}") You can also access List types, for example to get lines from the address you can do: simple("USD{body.address.lines[0]}") simple("USD{body.address.lines[1]}") simple("USD{body.address.lines[2]}") There is a special last keyword which can be used to get the last value from a list. simple("USD{body.address.lines[last]}") And to get the 2nd last you can subtract a number, so we can use last-1 to indicate this: simple("USD{body.address.lines[last-1]}") And the 3rd last is of course: simple("USD{body.address.lines[last-2]}") And you can call the size method on the list with: simple("USD{body.address.lines.size}") From Camel 2.11.1 onwards we added support for the length field for Java arrays as well, e.g.: String[] lines = new String[]{"foo", "bar", "cat"}; exchange.getIn().setBody(lines); simple("There are USD{body.length} lines") And you can combine this with the operator support as shown below: simple("USD{body.address.zip} > 1000") 304.5. Operator support The parser is limited to only support a single operator. To enable it, the left value must be enclosed in USD\{ }. The syntax is: Where the rightValue can be a String literal enclosed in ' ' , null , a constant value or another expression enclosed in USD\{ }. Important There must be spaces around the operator. Camel will automatically type convert the rightValue type to the leftValue type, so it is able to, for example, convert a string into a numeric value so that you can use > comparison for numeric values.
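Putting the OGNL navigation from section 304.4 and a single operator together, the following is a minimal sketch of a route in the Java DSL. The Order and Address POJOs, the direct:orders and mock:bigZip endpoints, and the zip threshold are hypothetical names introduced only for this illustration; they are not part of Camel itself.

// Minimal sketch: a Simple predicate navigating the body via OGNL and comparing numerically.
// Assumes camel-core on the classpath and that the message body is an Order instance (hypothetical).
import org.apache.camel.builder.RouteBuilder;

public class OrderZipRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:orders")
            // body.getAddress().getZip() is navigated via the OGNL shorthand body.address.zip
            .filter().simple("USD{body.address.zip} > 1000")
                .to("mock:bigZip");
    }
}

// Hypothetical POJOs the sketch assumes:
class Order {
    private Address address;
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

class Address {
    private int zip;
    public int getZip() { return zip; }
    public void setZip(int zip) { this.zip = zip; }
}

The filter only needs the getters shown; any message whose address has a zip greater than 1000 is routed to the mock endpoint, everything else is dropped by the filter.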
The following operators are supported: Operator Description == equals =~ Camel 2.16: equals ignore case (will ignore case when comparing String values) > greater than >= greater than or equals < less than <= less than or equals != not equals contains For testing if a string based value contains the given value not contains For testing if a string based value does not contain the given value ~~ For testing if a string based value contains the given value, ignoring case sensitivity regex For matching against a given regular expression pattern defined as a String value not regex For not matching against a given regular expression pattern defined as a String value in For matching if in a set of values, each element must be separated by a comma. If you want to include an empty value, then it must be defined using a double comma, e.g. ',,bronze,silver,gold', which is a set of four values with an empty value and then the three medals. not in For matching if not in a set of values, each element must be separated by a comma. If you want to include an empty value, then it must be defined using a double comma, e.g. ',,bronze,silver,gold', which is a set of four values with an empty value and then the three medals. is For matching if the left hand side type is an instanceof the value. not is For matching if the left hand side type is not an instanceof the value. range For matching if the left hand side is within a range of values defined as numbers: from..to . From Camel 2.9 onwards the range values must be enclosed in single quotes. not range For matching if the left hand side is not within a range of values defined as numbers: from..to . From Camel 2.9 onwards the range values must be enclosed in single quotes. starts with Camel 2.17.1, 2.18: For testing if the left hand side string starts with the right hand string. ends with Camel 2.17.1, 2.18: For testing if the left hand side string ends with the right hand string. And the following unary operators can be used: Operator Description ++ Camel 2.9: To increment a number by one. The left hand side must be a function, otherwise parsed as literal. -- Camel 2.9: To decrement a number by one. The left hand side must be a function, otherwise parsed as literal. \ Camel 2.9.3 to 2.10.x: To escape a value, e.g. \USD, to indicate a USD sign. Special: Use \n for new line, \t for tab, and \r for carriage return. Notice: Escaping is not supported using the File Language . Notice: From Camel 2.11 onwards the escape character is no longer supported; it is replaced with the following special escape sequences. \n Camel 2.11: To use the newline character. \t Camel 2.11: To use the tab character. \r Camel 2.11: To use the carriage return character. \} Camel 2.18: To use the } character as text. And the following logical operators can be used to group expressions: Operator Description and deprecated: use && instead. The logical and operator is used to group two expressions. or deprecated: use || instead. The logical or operator is used to group two expressions. && Camel 2.9: The logical and operator is used to group two expressions. || Camel 2.9: The logical or operator is used to group two expressions. Important Using and/or operators In Camel 2.4 or older, and or or can only be used once in a simple language expression. From Camel 2.5 onwards you can use these operators multiple times.
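Before moving on to the and / or syntax below, here is a sketch showing several of the operators from the table combined with && (Camel 2.9 onwards) in a Content Based Router. The header names, endpoint URIs and the route itself are hypothetical and only illustrate how the operators compose.

// Sketch combining the contains, range and in operators with && inside a choice().
// Assumes camel-core on the classpath; headers and endpoints are made up for the example.
import org.apache.camel.builder.RouteBuilder;

public class OperatorRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:quotes")
            .choice()
                // title must mention Camel AND the number header must fall in the range
                .when(simple("USD{header.title} contains 'Camel' && USD{header.number} range '100..199'"))
                    .to("mock:camelQuotes")
                // otherwise route the premium types listed with the in operator
                .when(simple("USD{header.type} in 'gold,silver'"))
                    .to("mock:premium")
                .otherwise()
                    .to("mock:other");
    }
}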
The syntax for AND is: And the syntax for OR is: Some examples: // exact equals match simple("USD{in.header.foo} == 'foo'") // ignore case when comparing, so if the header has value FOO this will match simple("USD{in.header.foo} =~ 'foo'") // here Camel will type convert '100' into the type of in.header.bar and if it is an Integer '100' will also be converted to an Integer simple("USD{in.header.bar} == '100'") simple("USD{in.header.bar} == 100") // 100 will be converted to the type of in.header.bar so we can do > comparison simple("USD{in.header.bar} > 100") 304.5.1. Comparing with different types When you compare with different types such as String and int, you have to take a bit of care. Camel will use the type from the left hand side as 1st priority, and fall back to the right hand side type if the values cannot be compared based on that type. This means you can flip the values to enforce a specific type. Suppose the bar value above is a String. Then you can flip the equation: simple("100 < USD{in.header.bar}") which then ensures the int type is used as 1st priority. This may change in the future if the Camel team improves the binary comparison operations to prefer numeric types over String based. It's most often the String type which causes problems when comparing with numbers. // testing for null simple("USD{in.header.baz} == null") // testing for not null simple("USD{in.header.baz} != null") And a slightly more advanced example where the right value is another expression: simple("USD{in.header.date} == USD{date:now:yyyyMMdd}") simple("USD{in.header.type} == USD{bean:orderService?method=getOrderType}") And an example with contains, testing if the title contains the word Camel: simple("USD{in.header.title} contains 'Camel'") And an example with regex, testing if the number header is a 4 digit value: simple("USD{in.header.number} regex '\\d{4}'") And finally an example testing if the header equals any of the values in the list. Each element must be separated by a comma, with no surrounding spaces. This also works for numbers etc, as Camel will convert each element into the type of the left hand side. simple("USD{in.header.type} in 'gold,silver'") And for all the last 3 we also support the negate test using not: simple("USD{in.header.type} not in 'gold,silver'") And you can test if the type is a certain instance, for example a String: simple("USD{in.header.type} is 'java.lang.String'") We have added a shorthand for all java.lang types so you can write it as: simple("USD{in.header.type} is 'String'") Ranges are also supported. The range interval requires numbers and both the from and to ends are inclusive. For instance to test whether a value is between 100 and 199: simple("USD{in.header.number} range 100..199") Notice we use .. in the range without spaces. It is based on the same syntax as Groovy. From Camel 2.9 onwards the range value must be in single quotes: simple("USD{in.header.number} range '100..199'") 304.5.2. Using Spring XML As the Spring XML does not have all the power of the Java DSL with its various builder methods, you have to resort to using other languages for testing with simple operators. Now you can do this with the simple language. In the sample below we want to test if the header is a widget order: <from uri="seda:orders"> <filter> <simple>USD{in.header.type} == 'widget'</simple> <to uri="bean:orderService?method=handleWidget"/> </filter> </from> 304.6. Using and / or If you have two expressions you can combine them with the and or or operator.
Tip Camel 2.9 onwards Use && or || from Camel 2.9 onwards. For instance: simple("USD{in.header.title} contains 'Camel' and USD{in.header.type'} == 'gold'") And of course the or is also supported. The sample would be: simple("USD{in.header.title} contains 'Camel' or USD{in.header.type'} == 'gold'") Notice: Currently and or or can only be used once in a simple language expression. This might change in the future. So you cannot do: simple("USD{in.header.title} contains 'Camel' and USD{in.header.type'} == 'gold' and USD{in.header.number} range 100..200") 304.7. Samples In the Spring XML sample below we filter based on a header value: <from uri="seda:orders"> <filter> <simple>USD{in.header.foo}</simple> <to uri="mock:fooOrders"/> </filter> </from> The Simple language can be used for the predicate test above in the Message Filter pattern, where we test if the in message has a foo header (a header with the key foo exists). If the expression evaluates to true then the message is routed to the mock:fooOrders endpoint, otherwise the message is dropped. The same example in Java DSL: from("seda:orders") .filter().simple("USD{in.header.foo}") .to("seda:fooOrders"); You can also use the simple language for simple text concatenations such as: from("direct:hello") .transform().simple("Hello USD{in.header.user} how are you?") .to("mock:reply"); Notice that we must use USD\{ } placeholders in the expression now to allow Camel to parse it correctly. And this sample uses the date command to output current date. from("direct:hello") .transform().simple("The today is USD{date:now:yyyyMMdd} and it is a great day.") .to("mock:reply"); And in the sample below we invoke the bean language to invoke a method on a bean to be included in the returned string: from("direct:order") .transform().simple("OrderId: USD{bean:orderIdGenerator}") .to("mock:reply"); Where orderIdGenerator is the id of the bean registered in the Registry. If using Spring then it is the Spring bean id. If we want to declare which method to invoke on the order id generator bean we must prepend .method name such as below where we invoke the generateId method. from("direct:order") .transform().simple("OrderId: USD{bean:orderIdGenerator.generateId}") .to("mock:reply"); We can use the ?method=methodname option that we are familiar with the Bean component itself: from("direct:order") .transform().simple("OrderId: USD{bean:orderIdGenerator?method=generateId}") .to("mock:reply"); And from Camel 2.3 onwards you can also convert the body to a given type, for example to ensure that it is a String you can do: <transform> <simple>Hello USD{bodyAs(String)} how are you?</simple> </transform> There are a few types which have a shorthand notation, so we can use String instead of java.lang.String . These are: byte[], String, Integer, Long . All other types must use their FQN name, e.g. org.w3c.dom.Document . It is also possible to lookup a value from a header Map in Camel 2.3 onwards: <transform> <simple>The gold value is USD{header.type[gold]}</simple> </transform> In the code above we lookup the header with name type and regard it as a java.util.Map and we then lookup with the key gold and return the value. If the header is not convertible to Map an exception is thrown. If the header with name type does not exist null is returned. From Camel 2.9 onwards you can nest functions, such as shown below: <setHeader headerName="myHeader"> <simple>USD{properties:USD{header.someKey}}</simple> </setHeader> 304.8. 
Referring to constants or enums Available as of Camel 2.11 Suppose you have an enum for customers. In a Content Based Router you can then use the Simple language to refer to this enum and check which enum value the message matches. 304.9. Using new lines or tabs in XML DSLs Available as of Camel 2.9.3 From Camel 2.9.3 onwards it is easier to specify new lines or tabs in XML DSLs, as you can now escape the value: <transform> <simple>The following text\nis on a new line</simple> </transform> 304.10. Leading and trailing whitespace handling Available as of Camel 2.10.0 From Camel 2.10.0 onwards, the trim attribute of the expression can be used to control whether the leading and trailing whitespace characters are removed or preserved. The default value is true, which removes the whitespace characters. <setBody> <simple trim="false">You get some trailing whitespace characters. </simple> </setBody> 304.11. Setting result type Available as of Camel 2.8 You can now provide a result type to the Simple expression, which means the result of the evaluation will be converted to the desired type. This is most useful for defining types such as booleans, integers, etc. For example to set a header as a boolean type you can do: .setHeader("cool", simple("true", Boolean.class)) And in XML DSL: <setHeader headerName="cool"> <!-- use resultType to indicate that the type should be a java.lang.Boolean --> <simple resultType="java.lang.Boolean">true</simple> </setHeader> 304.12. Changing function start and end tokens Available as of Camel 2.9.1 You can configure the function start and end tokens ( USD\{ } ) using the setters changeFunctionStartToken and changeFunctionEndToken on SimpleLanguage , using Java code. From Spring XML you can define a <bean> tag with the new changed tokens in the properties as shown below: <!-- configure Simple to use custom prefix/suffix tokens --> <bean id="simple" class="org.apache.camel.language.simple.SimpleLanguage"> <property name="functionStartToken" value="["/> <property name="functionEndToken" value="]"/> </bean> In the example above we use [ ] as the changed tokens. Notice that by changing the start/end tokens you change them in all the Camel applications which share the same camel-core on their classpath. For example, in an OSGi server this may affect many applications, whereas for a Web Application deployed as a WAR file it only affects that Web Application. 304.13. Loading script from external resource Available as of Camel 2.11 You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , e.g. to refer to a file on the classpath you can do: .setHeader("myHeader").simple("resource:classpath:mysimple.txt") 304.14. Setting Spring beans to Exchange properties Available as of Camel 2.6 You can set a Spring bean into an exchange property as shown below: <bean id="myBeanId" class="my.package.MyCustomClass" /> ... <route> ... <setProperty propertyName="monitoring.message"> <simple>ref:myBeanId</simple> </setProperty> ... </route> 304.15. Dependencies The Simple language is part of camel-core . | [
"simple(\"USD{body.address}\") simple(\"USD{body.address.street}\") simple(\"USD{body.address.zip}\")",
"simple(\"USD{body.address}\") simple(\"USD{body.getAddress.getStreet}\") simple(\"USD{body.address.getZip}\") simple(\"USD{body.doSomething}\")",
"simple(\"USD{body?.address?.street}\")",
"simple(\"USD{body[foo].name}\")",
"simple(\"USD{body['foo bar'].name}\")",
"simple(\"USD{body[foo]}\") simple(\"USD{body[this.is.foo]}\")",
"simple(\"USD{body[foo]?.name}\")",
"simple(\"USD{body.address.lines[0]}\") simple(\"USD{body.address.lines[1]}\") simple(\"USD{body.address.lines[2]}\")",
"simple(\"USD{body.address.lines[last]}\")",
"simple(\"USD{body.address.lines[last-1]}\")",
"simple(\"USD{body.address.lines[last-2]}\")",
"simple(\"USD{body.address.lines.size}\")",
"String[] lines = new String[]{\"foo\", \"bar\", \"cat\"}; exchange.getIn().setBody(lines); simple(\"There are USD{body.length} lines\")",
"simple(\"USD{body.address.zip} > 1000\")",
"USD{leftValue} OP rightValue",
"USD{leftValue} OP rightValue and USD{leftValue} OP rightValue",
"USD{leftValue} OP rightValue or USD{leftValue} OP rightValue",
"// exact equals match simple(\"USD{in.header.foo} == 'foo'\") // ignore case when comparing, so if the header has value FOO this will match simple(\"USD{in.header.foo} =~ 'foo'\") // here Camel will type convert '100' into the type of in.header.bar and if it is an Integer '100' will also be converter to an Integer simple(\"USD{in.header.bar} == '100'\") simple(\"USD{in.header.bar} == 100\") // 100 will be converter to the type of in.header.bar so we can do > comparison simple(\"USD{in.header.bar} > 100\")",
"simple(\"100 < USD{in.header.bar}\")",
"// testing for null simple(\"USD{in.header.baz} == null\") // testing for not null simple(\"USD{in.header.baz} != null\")",
"simple(\"USD{in.header.date} == USD{date:now:yyyyMMdd}\") simple(\"USD{in.header.type} == USD{bean:orderService?method=getOrderType}\")",
"simple(\"USD{in.header.title} contains 'Camel'\")",
"simple(\"USD{in.header.number} regex '\\\\d{4}'\")",
"simple(\"USD{in.header.type} in 'gold,silver'\")",
"simple(\"USD{in.header.type} not in 'gold,silver'\")",
"simple(\"USD{in.header.type} is 'java.lang.String'\")",
"simple(\"USD{in.header.type} is 'String'\")",
"simple(\"USD{in.header.number} range 100..199\")",
"simple(\"USD{in.header.number} range '100..199'\")",
"<from uri=\"seda:orders\"> <filter> <simple>USD{in.header.type} == 'widget'</simple> <to uri=\"bean:orderService?method=handleWidget\"/> </filter> </from>",
"simple(\"USD{in.header.title} contains 'Camel' and USD{in.header.type'} == 'gold'\")",
"simple(\"USD{in.header.title} contains 'Camel' or USD{in.header.type'} == 'gold'\")",
"simple(\"USD{in.header.title} contains 'Camel' and USD{in.header.type'} == 'gold' and USD{in.header.number} range 100..200\")",
"<from uri=\"seda:orders\"> <filter> <simple>USD{in.header.foo}</simple> <to uri=\"mock:fooOrders\"/> </filter> </from>",
"from(\"seda:orders\") .filter().simple(\"USD{in.header.foo}\") .to(\"seda:fooOrders\");",
"from(\"direct:hello\") .transform().simple(\"Hello USD{in.header.user} how are you?\") .to(\"mock:reply\");",
"from(\"direct:hello\") .transform().simple(\"The today is USD{date:now:yyyyMMdd} and it is a great day.\") .to(\"mock:reply\");",
"from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator}\") .to(\"mock:reply\");",
"from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator.generateId}\") .to(\"mock:reply\");",
"from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator?method=generateId}\") .to(\"mock:reply\");",
"<transform> <simple>Hello USD{bodyAs(String)} how are you?</simple> </transform>",
"<transform> <simple>The gold value is USD{header.type[gold]}</simple> </transform>",
"<setHeader headerName=\"myHeader\"> <simple>USD{properties:USD{header.someKey}}</simple> </setHeader>",
"<transform> <simple>The following text\\nis on a new line</simple> </transform>",
"<setBody> <simple trim=\"false\">You get some trailing whitespace characters. </simple> </setBody>",
".setHeader(\"cool\", simple(\"true\", Boolean.class))",
"<setHeader headerName=\"cool\"> <!-- use resultType to indicate that the type should be a java.lang.Boolean --> <simple resultType=\"java.lang.Boolean\">true</simple> </setHeader>",
"<!-- configure Simple to use custom prefix/suffix tokens --> <bean id=\"simple\" class=\"org.apache.camel.language.simple.SimpleLanguage\"> <property name=\"functionStartToken\" value=\"[\"/> <property name=\"functionEndToken\" value=\"]\"/> </bean>",
".setHeader(\"myHeader\").simple(\"resource:classpath:mysimple.txt\")",
"<bean id=\"myBeanId\" class=\"my.package.MyCustomClass\" /> <route> <setProperty propertyName=\"monitoring.message\"> <simple>ref:myBeanId</simple> </setProperty> </route>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/simple-language |