Chapter 6. Removed functionalities
Chapter 6. Removed functionalities None.
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.1_release_notes_and_known_issues/removed-functionalities
Chapter 9. StorageVersionMigration [migration.k8s.io/v1alpha1]
Chapter 9. StorageVersionMigration [migration.k8s.io/v1alpha1] Description StorageVersionMigration represents a migration of stored data to the latest storage version. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the migration. status object Status of the migration. 9.1.1. .spec Description Specification of the migration. Type object Required resource Property Type Description continueToken string The token used in the list options to get the chunk of objects to migrate. When the .status.conditions indicates the migration is "Running", users can use this token to check the progress of the migration. resource object The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable. 9.1.2. .spec.resource Description The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable. Type object Property Type Description group string The name of the group. resource string The name of the resource. version string The name of the version. 9.1.3. .status Description Status of the migration. Type object Property Type Description conditions array The latest available observations of the migration's current state. conditions[] object Describes the state of a migration at a certain point. 9.1.4. .status.conditions Description The latest available observations of the migration's current state. Type array 9.1.5. .status.conditions[] Description Describes the state of a migration at a certain point. Type object Required status type Property Type Description lastUpdateTime string The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of the condition. 9.2. API endpoints The following API endpoints are available: /apis/migration.k8s.io/v1alpha1/storageversionmigrations DELETE : delete collection of StorageVersionMigration GET : list objects of kind StorageVersionMigration POST : create a StorageVersionMigration /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name} DELETE : delete a StorageVersionMigration GET : read the specified StorageVersionMigration PATCH : partially update the specified StorageVersionMigration PUT : replace the specified StorageVersionMigration /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name}/status GET : read status of the specified StorageVersionMigration PATCH : partially update status of the specified StorageVersionMigration PUT : replace status of the specified StorageVersionMigration 9.2.1. /apis/migration.k8s.io/v1alpha1/storageversionmigrations Table 9.1. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of StorageVersionMigration Table 9.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind StorageVersionMigration Table 9.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.5. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigrationList schema 401 - Unauthorized Empty HTTP method POST Description create a StorageVersionMigration Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body StorageVersionMigration schema Table 9.8. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 202 - Accepted StorageVersionMigration schema 401 - Unauthorized Empty 9.2.2. /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name} Table 9.9. Global path parameters Parameter Type Description name string name of the StorageVersionMigration Table 9.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a StorageVersionMigration Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.12. Body parameters Parameter Type Description body DeleteOptions schema Table 9.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StorageVersionMigration Table 9.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.15. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StorageVersionMigration Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.17. Body parameters Parameter Type Description body Patch schema Table 9.18. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StorageVersionMigration Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body StorageVersionMigration schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 401 - Unauthorized Empty 9.2.3. /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name}/status Table 9.22. Global path parameters Parameter Type Description name string name of the StorageVersionMigration Table 9.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified StorageVersionMigration Table 9.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.25. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StorageVersionMigration Table 9.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.27. Body parameters Parameter Type Description body Patch schema Table 9.28. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StorageVersionMigration Table 9.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
Table 9.30. Body parameters Parameter Type Description body StorageVersionMigration schema Table 9.31. HTTP responses HTTP code Response body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 401 - Unauthorized Empty
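For orientation, the following is a minimal sketch of a StorageVersionMigration manifest built from the spec fields described above. The metadata name and the group/resource/version values are illustrative assumptions, not values taken from this reference:

apiVersion: migration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: example-deployments-migration   # hypothetical name
spec:
  resource:                             # the resource whose stored objects are migrated (immutable)
    group: apps                         # example API group
    resource: deployments               # example resource name
    version: v1                         # example version

You could create such an object by sending a POST request to /apis/migration.k8s.io/v1alpha1/storageversionmigrations, for example with oc create -f <file>.yaml, and then inspect .status.conditions to track the migration's progress.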
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/storageversionmigration-migration-k8s-io-v1alpha1
Chapter 4. Supporting services
Chapter 4. Supporting services 4.1. Job service The Job service schedules and executes tasks in a cloud environment. Independent services implement these tasks, which can be initiated through any of the supported interaction modes, including HTTP calls or Knative Events delivery. In OpenShift Serverless Logic, the Job service is responsible for controlling the execution of the time-triggered actions. Therefore, all the time-based states that you can use in a workflow are handled by the interaction between the workflow and the Job service. For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job service, and when the timeout is met, an HTTP callback is executed to notify the workflow. The main goal of the Job service is to manage active jobs, such as scheduled jobs that need to be executed. When a job reaches its final state, the Job service removes it. To retain job information in a permanent repository, the Job service produces status change events that can be recorded by an external service, such as the Data Index Service. Note You do not need to manually install or configure the Job service if you are using the OpenShift Serverless Operator to deploy workflows. The Operator handles these tasks automatically and manages all necessary configurations for each workflow to connect with it. 4.1.1. Job service leader election process The Job service operates as a singleton service, meaning only one active instance can schedule and execute jobs. To prevent conflicts when the service is deployed in the cloud, where multiple instances might be running, the Job service supports a leader election process. Only the instance that is elected as the leader manages external communication to receive and schedule jobs. Non-leader instances remain inactive in a standby state but continue attempting to become the leader through the election process. When a new instance starts, it does not immediately assume leadership. Instead, it enters the leader election process to determine if it can take over the leader role. If the current leader becomes unresponsive or if it is shut down, another running instance takes over as the leader. Note This leader election mechanism uses the underlying persistence backend, which is currently supported only in the PostgreSQL implementation. 4.2. Data Index service The Data Index service is a dedicated supporting service that stores the data related to the workflow instances and their associated jobs. This service provides a GraphQL endpoint allowing users to query that data. The Data Index service processes data received through events, which can originate from any workflow or directly from the Job service. Data Index supports Apache Kafka or Knative Eventing to consume CloudEvents messages from workflows. It indexes and stores this event data in a database, making it accessible through GraphQL. These events provide detailed information about the workflow execution. The Data Index service is central to OpenShift Serverless Logic search, insights, and management capabilities. The key features of the Data Index service are as follows: A flexible data structure A distributable, cloud-ready format Message-based communication with workflows via Apache Kafka, Knative, and CloudEvents A powerful GraphQL-based querying API Note When you are using the OpenShift Serverless Operator to deploy workflows, you do not need to manually install or configure the Data Index service. 
The Operator automatically manages all the necessary configurations for each workflow to connect with it. 4.2.1. GraphQL queries for workflow instances and jobs To retrieve data about workflow instances and jobs, you can use GraphQL queries. 4.2.1.1. Retrieve data from workflow instances You can retrieve information about a specific workflow instance by using the following query example: { ProcessInstances { id processId state parentProcessInstanceId rootProcessId rootProcessInstanceId variables nodes { id name type } } } 4.2.1.2. Retrieve data from jobs You can retrieve data from a specific job instance by using the following query example: { Jobs { id status priority processId processInstanceId executionCounter } } 4.2.1.3. Filter query results by using the where parameter You can filter query results by using the where parameter, allowing multiple combinations based on workflow attributes. Example query to filter by state { ProcessInstances(where: {state: {equal: ACTIVE}}) { id processId processName start state variables } } Example query to filter by ID { ProcessInstances(where: {id: {equal: "d43a56b6-fb11-4066-b689-d70386b9a375"}}) { id processId processName start state variables } } By default, filters are combined using the AND operator. You can modify this behavior by combining filters with the AND or OR operators. Example query to combine filters with the OR operator { ProcessInstances(where: {or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}) { id processId processName start end state } } Example query to combine filters with the AND and OR operators { ProcessInstances(where: {and: {processId: {equal: "travels"}, or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}}) { id processId processName start end state } } Depending on the attribute type, you can use the following available operators: Attribute type Available operators String array contains : String containsAll : Array of strings containsAny : Array of strings isNull : Boolean (true or false) String in : Array of strings like : String isNull : Boolean (true or false) equal : String ID in : Array of strings isNull : Boolean (true or false) equal : String Boolean isNull : Boolean (true or false) equal : Boolean (true or false) Numeric in : Array of integers isNull : Boolean equal : Integer greaterThan : Integer greaterThanEqual : Integer lessThan : Integer lessThanEqual : Integer between : Numeric range from : Integer to : Integer Date isNull : Boolean (true or false) equal : Date time greaterThan : Date time greaterThanEqual : Date time lessThan : Date time lessThanEqual : Date time between : Date range from : Date time to : Date time 4.2.1.4. Sort query results by using the orderBy parameter You can sort query results based on workflow attributes by using the orderBy parameter. You can also specify the sorting direction in an ascending ( ASC ) or a descending ( DESC ) order. Multiple attributes are applied in the order you specified. Example query to sort by the start time in an ASC order { ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}) { id processId processName start end state } } 4.2.1.5. Limit the number of results by using the pagination parameter You can control the number of returned results and specify an offset by using the pagination parameter. Example query to limit results to 10, starting from offset 0 { ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}, pagination: {limit: 10, offset: 0}) { id processId processName start end state } } 4.3. 
Managing supporting services This section provides an overview of the supporting services essential for OpenShift Serverless Logic. It specifically focuses on configuring and deploying the Data Index service and Job Service supporting services using the OpenShift Serverless Logic Operator. In a typical OpenShift Serverless Logic installation, you must deploy both services to ensure successful workflow execution. The Data Index service allows for efficient data management, while the Job Service ensures reliable job handling. 4.3.1. Supporting services and workflow integration When you deploy a supporting service in a given namespace, you can choose between an enabled or disabled deployment. An enabled deployment signals the OpenShift Serverless Logic Operator to automatically intercept workflow deployments using the preview or gitops profile within the namespace and configure them to connect with the service. For example, when the Data Index service is enabled, workflows are automatically configured to send status change events to it. Similarly, enabling the Job Service ensures that a job is created whenever a workflow requires a timeout. The OpenShift Serverless Logic Operator also configures the Job Service to send events to the Data Index service, facilitating seamless integration between the services. The OpenShift Serverless Logic Operator does not just deploy supporting services, it also manages other necessary configurations to ensure successful workflow execution. All these configurations are handled automatically. You only need to provide the supporting services configuration in the SonataFlowPlatform CR. Note Deploying only one of the supporting services or using a disabled deployment are advanced use cases. In a standard installation, you must enable both services to ensure smooth workflow execution. 4.3.2. Supporting services deployment with the SonataFlowPlatform CR To deploy supporting services, configure the dataIndex and jobService subfields within the spec.services section of the SonataFlowPlatform custom resource (CR). This configuration instructs the OpenShift Serverless Logic Operator to deploy each service when the SonataFlowPlatform CR is applied. Each configuration of a service is handled independently, allowing you to customize these settings alongside other configurations in the SonataFlowPlatform CR. See the following scaffold example configuration for deploying supporting services: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: dataIndex: 1 enabled: true 2 # Specific configurations for the Data Index Service # might be included here jobService: 3 enabled: true 4 # Specific configurations for the Job Service # might be included here 1 Data Index service configuration field. 2 Setting enabled: true deploys the Data Index service. If set to false or omitted, the deployment will be disabled. The default value is false . 3 Job Service configuration field. 4 Setting enabled: true deploys the Job Service. If set to false or omitted, the deployment will be disabled. The default value is false . 4.3.3. Supporting services scope The SonataFlowPlatform custom resource (CR) enables the deployment of supporting services within a specific namespace. This means all automatically configured supporting services and workflow communications are restricted to the namespace of the deployed platform. 
This feature is particularly useful when separate instances of supporting services are required for different sets of workflows. For example, you can deploy an application in isolation with its workflows and supporting services, ensuring they remain independent from other deployments. 4.3.4. Supporting services persistence configurations The persistence configuration for supporting services in OpenShift Serverless Logic can be either ephemeral or PostgreSQL, depending on needs of your environment. Ephemeral persistence is ideal for development and testing, while PostgreSQL persistence is recommended for production environments. 4.3.4.1. Ephemeral persistence configuration The ephemeral persistence uses an embedded PostgreSQL database that is dedicated to each service. The OpenShift Serverless Logic Operator recreates this database with every service restart, making it suitable only for development and testing purposes. You do not need any additional configuration other than the following SonataFlowPlatform CR: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: dataIndex: enabled: true # Specific configurations for the Data Index Service # might be included here jobService: enabled: true # Specific configurations for the Job Service # might be included here 4.3.4.2. PostgreSQL persistence configuration For PostgreSQL persistence, you must set up a PostgreSQL server instance on your cluster. The administration of this instance remains independent of the OpenShift Serverless Logic Operator control. To connect a supporting service with the PostgreSQL server, you must configure the appropriate database connection parameters. You can configure PostgreSQL persistence in the SonataFlowPlatform CR by using the following example: Example of PostgreSQL persistence configuration apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: dataIndex: enabled: true persistence: postgresql: serviceRef: name: postgres-example 1 namespace: postgres-example-namespace 2 databaseName: example-database 3 databaseSchema: data-index-schema 4 port: 1234 5 secretRef: name: postgres-secrets-example 6 userKey: POSTGRESQL_USER 7 passwordKey: POSTGRESQL_PASSWORD 8 jobService: enabled: true persistence: postgresql: # Specific database configuration for the Job Service # might be included here. 1 Name of the service to connect with the PostgreSQL database server. 2 Optional: Defines the namespace of the PostgreSQL Service. Defaults to the SonataFlowPlatform namespace. 3 Defines the name of the PostgreSQL database for storing supporting service data. 4 Optional: Specifies the schema for storing supporting service data. Default value is SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service . For example, sonataflow-platform-example-data-index-service . 5 Optional: Port number to connect with the PostgreSQL Service. Default value is 5432 . 6 Defines the name of the secret containing the username and password for database access. 7 Defines the name of the key in the secret that contains the username to connect with the database. 8 Defines the name of the key in the secret that contains the password to connect with the database. Note You can configure each service's persistence independently by using the respective persistence field. 
Create the secrets to access PostgreSQL by running the following command: USD oc create secret generic <postgresql_secret_name> \ --from-literal=POSTGRESQL_USER=<user> \ --from-literal=POSTGRESQL_PASSWORD=<password> \ -n <namespace> 4.3.4.3. Common PostgreSQL persistence configuration The OpenShift Serverless Logic Operator automatically connects supporting services to the common PostgreSQL server configured in the spec.persistence field. The following precedence rules apply: If you configure a specific persistence for a supporting service, for example, services.dataIndex.persistence , it uses that configuration. If you do not configure persistence for a service, the system uses the common persistence configuration from the current platform. Note When using a common PostgreSQL configuration, each service schema is automatically set as the SonataFlowPlatform name, suffixed with -data-index-service or -jobs-service , for example, sonataflow-platform-example-data-index-service . 4.3.5. Supporting services eventing system configurations For an OpenShift Serverless Logic installation, the following types of events are generated: Outgoing and incoming events related to workflow business logic. Events sent from workflows to the Data Index and Job Service. Events sent from the Job Service to the Data Index Service. The OpenShift Serverless Logic Operator leverages the Knative Eventing system to manage all event communication between these services, ensuring efficient and reliable event handling. 4.3.5.1. Platform-scoped eventing system configuration To configure a platform-scoped eventing system, you can use the spec.eventing.broker.ref field in the SonataFlowPlatform CR to reference a Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the supporting services to produce and consume events by using the specified broker. A workflow deployed in the same namespace with the preview or gitops profile and without a custom eventing system configuration automatically links to the specified broker. Important In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability. The following example displays how to configure the SonataFlowPlatform CR for a platform-scoped eventing system: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: eventing: broker: ref: name: example-broker 1 namespace: example-broker-namespace 2 apiVersion: eventing.knative.dev/v1 kind: Broker 1 Specifies the Knative Eventing Broker name. 2 Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform . 4.3.5.2. Service-scoped eventing system configuration A service-scoped eventing system configuration allows for fine-grained control over the eventing system, specifically for the Data Index or the Job Service. Note For an OpenShift Serverless Logic installation, consider using a platform-scoped eventing system configuration. The service-scoped configuration is intended for advanced use cases only. 4.3.5.3. 
Data Index eventing system configuration To configure a service-scoped eventing system for the Data Index, you must use the spec.services.dataIndex.source.ref field in the SonataFlowPlatform CR to refer to a specific Knative Eventing Broker. This configuration instructs the OpenShift Serverless Logic Operator to automatically link the Data Index to consume SonataFlow system events from that Broker. Important In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability. The following example displays the Data Index eventing system configuration: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example spec: services: dataIndex: source: ref: name: data-index-source-example-broker 1 namespace: data-index-source-example-broker-namespace 2 apiVersion: eventing.knative.dev/v1 kind: Broker 1 Specifies the Knative Eventing Broker from which the Data Index consumes events. 2 Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the broker in the same namespace as SonataFlowPlatform . 4.3.5.4. Job Service eventing system configuration To configure a service-scoped eventing system for the Job Service, you must use the spec.services.jobService.source.ref and spec.services.jobService.sink.ref fields in the SonataFlowPlatform CR. These fields instruct the OpenShift Serverless Logic Operator to automatically link the Job Service to consume and produce SonataFlow system events, based on the provided configuration. Important In production environments, use a production-ready broker, such as the Knative Kafka Broker, for enhanced scalability and reliability. The following example displays the Job Service eventing system configuration: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example spec: services: jobService: source: ref: name: jobs-service-source-example-broker 1 namespace: jobs-service-source-example-broker-namespace 2 apiVersion: eventing.knative.dev/v1 kind: Broker sink: ref: name: jobs-service-sink-example-broker 3 namespace: jobs-service-sink-example-broker-namespace 4 apiVersion: eventing.knative.dev/v1 kind: Broker 1 Specifies the Knative Eventing Broker from which the Job Service consumes events. 2 Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform . 3 Specifies the Knative Eventing Broker on which the Job Service produces events. 4 Optional: Defines the namespace of the Knative Eventing Broker. If you do not specify a value, the parameter defaults to the SonataFlowPlatform namespace. Consider creating the Broker in the same namespace as SonataFlowPlatform . 4.3.5.5. Cluster-scoped eventing system configuration for supporting services When you deploy cluster-scoped supporting services, the supporting services automatically link to the Broker specified in the SonataFlowPlatform CR, which is referenced by the SonataFlowClusterPlatform CR. 4.3.5.6. Eventing system configuration precedence rules for supporting services The OpenShift Serverless Logic Operator follows a defined order of precedence to configure the eventing system for a supporting service. 
Eventing system configuration precedence rules are as follows: If the supporting service has its own eventing system configuration, using either the Data Index eventing system or the Job Service eventing system configuration, then the supporting service configuration takes precedence. If the SonataFlowPlatform CR enclosing the supporting service is configured with a platform-scoped eventing system, that configuration takes precedence. If the current cluster is configured with a cluster-scoped eventing system, that configuration takes precedence. If none of the configurations exist, the supporting service delivers events by direct HTTP calls. 4.3.5.7. Eventing system linking configuration The OpenShift Serverless Logic Operator automatically creates Knative Eventing objects, such as SinkBindings and triggers, to link supporting services with the eventing system. These objects enable the production and consumption of events by the supporting services. The following example displays a SonataFlowPlatform CR for which the Knative Eventing objects are created: apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: eventing: broker: ref: name: example-broker 1 apiVersion: eventing.knative.dev/v1 kind: Broker services: dataIndex: 2 enabled: true jobService: 3 enabled: true 1 Used by the Data Index, Job Service, and workflows, unless overridden. 2 Data Index ephemeral deployment, configures the Data Index service. 3 Job Service ephemeral deployment, configures the Job Service. The following example displays how to configure a Knative Kafka Broker for use with the SonataFlowPlatform CR: Example of a Knative Kafka Broker used by the SonataFlowPlatform CR apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-broker namespace: example-namespace spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config namespace: knative-eventing 1 Use the Kafka class to create a Kafka Knative Broker. 
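The Kafka Broker example above references a kafka-broker-config ConfigMap in the knative-eventing namespace but does not show its contents. As a hedged sketch only, following the usual Knative Kafka Broker conventions rather than anything defined in this document, such a ConfigMap might look like the following; the bootstrap server address is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-broker-config
  namespace: knative-eventing
data:
  # Hypothetical Kafka bootstrap address; replace it with the value for your cluster.
  bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"
  default.topic.partitions: "10"
  default.topic.replication.factor: "3"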
The following command displays the list of triggers set up for the Data Index and Job Service events, showing which services are subscribed to the events: USD oc get triggers -n example-namespace Example output NAME BROKER SINK AGE CONDITIONS READY REASON data-index-jobs-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-definition-e48b4e4bf73e22b90ecf7e093ff6b1eaf example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-error-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-instance-mul35f055c67a626f51bb8d2752606a6b54 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-node-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-state-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-variable-ac727d6051750888dedb72f697737c0dfbf example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - jobs-service-create-job-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-jobs-service 106s 7 OK / 7 True - jobs-service-delete-job-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-jobs-service 106s 7 OK / 7 True - To see the SinkBinding resource for the Job Service, use the following command: USD oc get sources -n example-namespace Example output NAME TYPE RESOURCE SINK READY sonataflow-platform-example-jobs-service-sb SinkBinding sinkbindings.sources.knative.dev broker:example-broker True 4.3.6. Advanced supporting services configurations In scenarios where you must apply advanced configurations for supporting services, use the podTemplate field in the SonataFlowPlatform custom resource (CR). This field allows you to customize the service pod deployment by specifying configurations like the number of replicas, environment variables, container images, and initialization options. You can configure advanced settings for the service by using the following example: Advanced configurations example for the Data Index service apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: # This can be either 'dataIndex' or 'jobService' dataIndex: enabled: true podTemplate: replicas: 2 1 container: 2 env: 3 - name: <any_advanced_config_property> value: <any_value> image: 4 initContainers: 5 Note You can set the 'services' field to either 'dataIndex' or 'jobService' depending on your requirement. The rest of the configuration remains the same. 1 Defines the number of replicas. Default value is 1 . In the case of jobService , this value is always overridden to 1 because it operates as a singleton service. 2 Holds specific configurations for the container running the service. 3 Allows you to fine-tune service properties by specifying environment variables. 4 Configures the container image for the service, useful if you need to update or customize the image. 5 Configures init containers for the pod, useful for setting up prerequisites before the main container starts. 
Note The podTemplate field provides flexibility for tailoring the deployment of each supporting service. It follows the standard PodSpec API, meaning the same API validation rules apply to these fields. 4.3.7. Cluster scoped supporting services You can define a cluster-wide set of supporting services that can be consumed by workflows across different namespaces, by using the SonataFlowClusterPlatform custom resource (CR). By referencing an existing namespace-specific SonataFlowPlatform CR, you can extend the use of these services cluster-wide. You can use the following example of a basic configuration that enables workflows deployed in any namespace to utilize supporting services deployed in a specific namespace, such as example-namespace : Example of a SonataFlowClusterPlatform CR apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowClusterPlatform metadata: name: cluster-platform spec: platformRef: name: sonataflow-platform-example 1 namespace: example-namespace 2 1 Specifies the name of the already installed SonataFlowPlatform CR that manages the supporting services. 2 Specifies the namespace of the SonataFlowPlatform CR that manages the supporting services. Note You can override these cluster-wide services within any namespace by configuring that namespace in SonataFlowPlatform.spec.services .
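As an illustration of the override mentioned in the preceding note, a namespace that needs its own supporting services could apply a SonataFlowPlatform CR with the services configured locally. This is a minimal sketch; the name and namespace below are hypothetical:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: local-platform-override     # hypothetical name
  namespace: team-a-namespace       # hypothetical namespace that opts out of the cluster-wide services
spec:
  services:
    dataIndex:
      enabled: true
    jobService:
      enabled: true

Workflows deployed in that namespace would then connect to these namespace-local services instead of the cluster-scoped services referenced by the SonataFlowClusterPlatform CR.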
[ "{ ProcessInstances { id processId state parentProcessInstanceId rootProcessId rootProcessInstanceId variables nodes { id name type } } }", "{ Jobs { id status priority processId processInstanceId executionCounter } }", "{ ProcessInstances(where: {state: {equal: ACTIVE}}) { id processId processName start state variables } }", "{ ProcessInstances(where: {id: {equal: \"d43a56b6-fb11-4066-b689-d70386b9a375\"}}) { id processId processName start state variables } }", "{ ProcessInstances(where: {or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}) { id processId processName start end state } }", "{ ProcessInstances(where: {and: {processId: {equal: \"travels\"}, or: {state: {equal: ACTIVE}, rootProcessId: {isNull: false}}}}) { id processId processName start end state } }", "{ ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}) { id processId processName start end state } }", "{ ProcessInstances(where: {state: {equal: ACTIVE}}, orderBy: {start: ASC}, pagination: {limit: 10, offset: 0}) { id processId processName start end state } }", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: dataIndex: 1 enabled: true 2 # Specific configurations for the Data Index Service # might be included here jobService: 3 enabled: true 4 # Specific configurations for the Job Service # might be included here", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: dataIndex: enabled: true # Specific configurations for the Data Index Service # might be included here jobService: enabled: true # Specific configurations for the Job Service # might be included here", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: dataIndex: enabled: true persistence: postgresql: serviceRef: name: postgres-example 1 namespace: postgres-example-namespace 2 databaseName: example-database 3 databaseSchema: data-index-schema 4 port: 1234 5 secretRef: name: postgres-secrets-example 6 userKey: POSTGRESQL_USER 7 passwordKey: POSTGRESQL_PASSWORD 8 jobService: enabled: true persistence: postgresql: # Specific database configuration for the Job Service # might be included here.", "oc create secret generic <postgresql_secret_name> --from-literal=POSTGRESQL_USER=<user> --from-literal=POSTGRESQL_PASSWORD=<password> -n <namespace>", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: eventing: broker: ref: name: example-broker 1 namespace: example-broker-namespace 2 apiVersion: eventing.knative.dev/v1 kind: Broker", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example spec: services: dataIndex: source: ref: name: data-index-source-example-broker 1 namespace: data-index-source-example-broker-namespace 2 apiVersion: eventing.knative.dev/v1 kind: Broker", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example spec: services: jobService: source: ref: name: jobs-service-source-example-broker 1 namespace: jobs-service-source-example-broker-namespace 2 apiVersion: eventing.knative.dev/v1 kind: Broker sink: ref: name: jobs-service-sink-example-broker 3 namespace: jobs-service-sink-example-broker-namespace 4 apiVersion: 
eventing.knative.dev/v1 kind: Broker", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: eventing: broker: ref: name: example-broker 1 apiVersion: eventing.knative.dev/v1 kind: Broker services: dataIndex: 2 enabled: true jobService: 3 enabled: true", "apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-broker namespace: example-namespace spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config namespace: knative-eventing", "oc get triggers -n example-namespace", "NAME BROKER SINK AGE CONDITIONS READY REASON data-index-jobs-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-definition-e48b4e4bf73e22b90ecf7e093ff6b1eaf example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-error-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-instance-mul35f055c67a626f51bb8d2752606a6b54 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-node-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-state-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - data-index-process-variable-ac727d6051750888dedb72f697737c0dfbf example-broker service:sonataflow-platform-example-data-index-service 106s 7 OK / 7 True - jobs-service-create-job-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-jobs-service 106s 7 OK / 7 True - jobs-service-delete-job-fbf285df-c0a4-4545-b77a-c232ec2890e2 example-broker service:sonataflow-platform-example-jobs-service 106s 7 OK / 7 True -", "oc get sources -n example-namespace", "NAME TYPE RESOURCE SINK READY sonataflow-platform-example-jobs-service-sb SinkBinding sinkbindings.sources.knative.dev broker:example-broker True", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: name: sonataflow-platform-example namespace: example-namespace spec: services: # This can be either 'dataIndex' or 'jobService' dataIndex: enabled: true podTemplate: replicas: 2 1 container: 2 env: 3 - name: <any_advanced_config_property> value: <any_value> image: 4 initContainers: 5", "apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowClusterPlatform metadata: name: cluster-platform spec: platformRef: name: sonataflow-platform-example 1 namespace: example-namespace 2" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serverless_logic/supporting-services
Preface
Preface Welcome to the Red Hat JBoss Core Services version 2.4.57 Service Pack 4 release. Red Hat JBoss Core Services Apache HTTP Server is an open source web server developed by the Apache Software Foundation . The Apache HTTP Server includes the following features: Implements the current HTTP standards, including HTTP/1.1 and HTTP/2 Supports Transport Layer Security (TLS) encryption through OpenSSL , which provides secure connections between the web server and web clients Supports extensible functionality through the use of modules, some of which are included with the Red Hat JBoss Core Services Apache HTTP Server
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/pr01
Chapter 4. Integrating Red Hat Satellite and Ansible Automation Controller
Chapter 4. Integrating Red Hat Satellite and Ansible Automation Controller You can integrate Red Hat Satellite and Ansible Automation Controller to use Satellite Server as a dynamic inventory source for Ansible Automation Controller. Ansible Automation Controller is a component of the Red Hat Ansible Automation Platform. You can also use the provisioning callback function to run playbooks on hosts managed by Satellite, from either the host or Ansible Automation Controller. When provisioning new hosts from Satellite Server, you can use the provisioning callback function to trigger playbook runs from Ansible Automation Controller. The playbook configures the host following Kickstart deployment. 4.1. Adding Satellite Server to Ansible Automation Controller as a Dynamic Inventory Item To add Satellite Server to Ansible Automation Controller as a dynamic inventory item, you must create a credential for a Satellite Server user on Ansible Automation Controller, add an Ansible Automation Controller user to the credential, and then configure an inventory source. Prerequisites If your Satellite deployment is large, for example, managing tens of thousands of hosts, using a non-admin user can negatively impact performance because of time penalties that accrue during authorization checks. For large deployments, consider using an admin user. For non-admin users, you must assign the Ansible Tower Inventory Reader role to your Satellite Server user. For more information about managing users, roles, and permission filters, see Creating and Managing Roles in Administering Red Hat Satellite . You must host your Satellite Server and Ansible Automation Controller on the same network or subnet. Procedure In the Ansible Automation Controller web UI, create a credential for your Satellite. For more information about creating credentials, see Add a New Credential and Red Hat Satellite Credentials in the Automation Controller User Guide . Table 4.1. Satellite Credentials Credential Type : Red Hat Satellite 6 Satellite URL : https:// satellite.example.com Username : The username of the Satellite user with the integration role. Password : The password of the Satellite user. Add an Ansible Automation Controller user to the new credential. For more information about adding a user to a credential, see Getting Started with Credentials in the Automation Controller User Guide . Add a new inventory. For more information, see Add a new inventory in the Automation Controller User Guide . In the new inventory, add Satellite Server as the inventory source, specifying the following inventory source options. For more information, see Add Source in the Automation Controller User Guide . Table 4.2. Inventory Source Options Source Red Hat Satellite 6 Credential The credential you create for Satellite Server. Overwrite Select Overwrite Variables Select Update on Launch Select Cache Timeout 90 Ensure that you synchronize the source that you add. 4.2. Configuring Provisioning Callback for a Host When you create hosts in Satellite, you can use Ansible Automation Controller to run playbooks to configure your newly created hosts. This is called provisioning callback in Ansible Automation Controller. The provisioning callback function triggers a playbook run from Ansible Automation Controller as part of the provisioning process. The playbook configures the host after Kickstart deployment. For more information about provisioning callbacks, see Provisioning Callbacks in the Automation Controller User Guide . 
In Satellite Server, the Kickstart Default and Kickstart Default Finish templates include three snippets: ansible_provisioning_callback ansible_tower_callback_script ansible_tower_callback_service You can add parameters to hosts or host groups to provide the credentials that these snippets can use to run Ansible playbooks on your newly created hosts. Prerequisites Before you can configure provisioning callbacks, you must add Satellite as a dynamic inventory in Ansible Automation Controller. For more information, see Integrating Satellite and Ansible Automation Controller . In the Ansible Automation Controller web UI, you must complete the following tasks: Create a machine credential for your new host. Ensure that you enter the same password in the credential that you plan to assign to the host that you create in Satellite. For more information, see Add a New Credential in the Automation Controller User Guide . Create a project. For more information, see Projects in the Ansible Automation Controller User Guide . Add a job template to your project. For more information, see Job Templates in the Automation Controller User Guide . In your job template, you must enable provisioning callbacks, generate the host configuration key, and note the template_ID of your job template. For more information about job templates, see Job Templates in the Automation Controller User Guide . Procedure In the Satellite web UI, navigate to Configure > Host Group . Create a host group or edit an existing host group. In the Host Group window, click the Parameters tab. Click Add Parameter . Enter the following information for each new parameter: Table 4.3. Host Parameters Name Value Description ansible_tower_provisioning true Enables Provisioning Callback. ansible_tower_fqdn controller.example.com The fully qualified domain name (FQDN) of your Ansible Automation Controller. Do not add https because this is appended by Satellite. ansible_job_template_id template_ID The ID of your provisioning template that you can find in the URL of the template: /templates/job_template/ 5 . ansible_host_config_key config_KEY The host configuration key that your job template generates in Ansible Automation Controller. Click Submit . Create a host using the host group. On the new host, enter the following command to start the ansible-callback service: On the new host, enter the following command to output the status of the ansible-callback service: Provisioning callback is configured correctly if the command returns the following output: Manual Provisioning Callback You can use the provisioning callback URL and the host configuration key from a host to call Ansible Automation Controller. For example: Ensure that you use https when you enter the provisioning callback URL. This triggers the playbook run specified in the template against the host.
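The manual callback described in the example above can be issued from the provisioned host with a command similar to the following sketch, where my_config_key and the job template ID 8 are placeholder values that you replace with your host configuration key and template ID:
curl -k -s --data host_config_key=my_config_key https://controller.example.com/api/v2/job_templates/8/callback/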
[ "systemctl start ansible-callback", "systemctl status ansible-callback", "SAT_host systemd[1]: Started Provisioning callback to Ansible Automation Controller", "curl -k -s --data curl --insecure --data host_config_key= my_config_key https:// controller.example.com /api/v2/job_templates/ 8 /callback/" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_red_hat_satellite_to_use_ansible/integrating-ansible-tower_ansible
Chapter 66. Core engine API for the process engine
Chapter 66. Core engine API for the process engine The process engine executes business processes. To define the processes, you create business assets , including process definitions and custom tasks. You can use the Core Engine API to load, execute, and manage processes in the process engine. Several levels of control are available: At the lowest level, you can directly create a KIE base and a KIE session . A KIE base represents all the assets in a business process. A KIE session is an entity in the process engine that runs instances of a business process. This level provides fine-grained control, but requires explicit declaration and configuration of process instances, task handlers, event handlers, and other process engine entities in your code. You can use the RuntimeManager class to manage sessions and processes. This class provides sessions for required process instances using a configurable strategy. It automatically configures the interaction between the KIE session and task services. It disposes of process engine entities that are no longer necessary, ensuring optimal use of resources. You can use a fluent API to instantiate RuntimeManager with the necessary business assets and to configure its environment. You can use the Services API to manage the execution of processes. For example, the deployment service deploys business assets into the engine, forming a deployment unit . The process service runs a process from this deployment unit. If you want to embed the process engine in your application, the Services API is the most convenient option, because it hides the internal details of configuring and managing the engine. Finally, you can deploy a KIE Server that loads business assets from KJAR files and runs processes. KIE Server provides a REST API for loading and managing the processes. You can also use Business Central to manage a KIE Server. If you use KIE Server, you do not need to use the Core Engine API. For information about deploying and managing processes on a KIE Server, see Packaging and deploying an Red Hat Process Automation Manager project . For the full reference information for all public process engine API calls, see the Java documentation . Other API classes also exist in the code, but they are internal APIs that can be changed in later versions. Use public APIs in applications that you develop and maintain. 66.1. KIE base and KIE session A KIE base contains a reference to all process definitions and other assets relevant for a process. The engine uses this KIE base to look up all information for the process, or for several processes, whenever necessary. You can load assets into a KIE base from various sources, such as a class path, file system, or process repository. Creating a KIE base is a resource-heavy operation, as it involves loading and parsing assets from various sources. You can dynamically modify the KIE base to add or remove process definitions and other assets at run time. After you create a KIE base, you can instantiate a KIE session based on this KIE base. Use this KIE session to run processes based on the definitions in the KIE base. When you use the KIE session to start a process, a new process instance is created. This instance maintains a specific process state. Different instances in the same KIE session can use the same process definition but have different states. Figure 66.1. 
KIE base and KIE session in the process engine For example, if you develop an application to process sales orders, you can create one or more process definitions that determine how an order should be processed. When starting the application, you first need to create a KIE base that contains those process definitions. You can then create a session based on this KIE base. When a new sales order comes in, start a new process instance for the order. This process instance contains the state of the process for the specific sales request. You can create many KIE sessions for the same KIE base and you can create many instances of the process within the same KIE session. Creating a KIE session, and also creating a process instance within the KIE session, uses far fewer resources than creating a KIE base. If you modify a KIE base, all the KIE sessions that use it can use the modifications automatically. In most simple use cases, you can use a single KIE session to execute all processes. You can also use several sessions if needed. For example, if you want order processing for different customers to be completely independent, you can create a KIE session for each customer. You can also use multiple sessions for scalability reasons. In typical applications you do not need to create a KIE base or KIE session directly. However, when you use other levels of the process engine API, you can interact with elements of the API that this level defines. 66.1.1. KIE base The KIE base includes all process definitions and other assets that your application might need to execute a business process. To create a KIE base, use a KieHelper instance to load processes from various resources, such as the class path or the file system, and to create a new KIE base. The following code snippet shows how to create a KIE base consisting of only one process definition, which is loaded from the class path. Creating a KIE base containing one process definition KieHelper kieHelper = new KieHelper(); KieBase kieBase = kieHelper .addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn")) .build(); The ResourceFactory class has similar methods to load resources from a file, a URL, an InputStream, a Reader, and other sources. Note This "manual" process of creating a KIE base is simpler than other alternatives, but can make an application hard to maintain. Use other methods of creating a KIE base, such as the RuntimeManager class or the Services API, for applications that you expect to develop and maintain over long periods of time. 66.1.2. KIE session After creating and loading the KIE base, you can create a KIE session to interact with the process engine. You can use this session to start and manage processes and to signal events. The following code snippet creates a session based on the KIE base that you created previously and then starts a process instance, referencing the ID in the process definition. Creating a KIE session and starting a process instance KieSession ksession = kbase.newKieSession(); ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess"); 66.1.3. ProcessRuntime interface The KieSession class exposes the ProcessRuntime interface, which defines all the session methods for interacting with processes, as the following definition shows. Definition of the ProcessRuntime interface /** * Start a new process instance. Use the process (definition) that * is referenced by the given process ID. 
* * @param processId The ID of the process to start * @return the ProcessInstance that represents the instance of the process that was started */ ProcessInstance startProcess(String processId); /** * Start a new process instance. Use the process (definition) that * is referenced by the given process ID. You can pass parameters * to the process instance as name-value pairs, and these parameters set * variables of the process instance. * * @param processId the ID of the process to start * @param parameters the process variables to set when starting the process instance * @return the ProcessInstance that represents the instance of the process that was started */ ProcessInstance startProcess(String processId, Map<String, Object> parameters); /** * Signals the process engine that an event has occurred. The type parameter defines * the type of event and the event parameter can contain additional information * related to the event. All process instances that are listening to this type * of (external) event will be notified. For performance reasons, use this type of * event signaling only if one process instance must be able to notify * other process instances. For internal events within one process instance, use the * signalEvent method that also include the processInstanceId of the process instance * in question. * * @param type the type of event * @param event the data associated with this event */ void signalEvent(String type, Object event); /** * Signals the process instance that an event has occurred. The type parameter defines * the type of event and the event parameter can contain additional information * related to the event. All node instances inside the given process instance that * are listening to this type of (internal) event will be notified. Note that the event * will only be processed inside the given process instance. All other process instances * waiting for this type of event will not be notified. * * @param type the type of event * @param event the data associated with this event * @param processInstanceId the id of the process instance that should be signaled */ void signalEvent(String type, Object event, long processInstanceId); /** * Returns a collection of currently active process instances. Note that only process * instances that are currently loaded and active inside the process engine are returned. * When using persistence, it is likely not all running process instances are loaded * as their state is stored persistently. It is best practice not to use this * method to collect information about the state of your process instances but to use * a history log for that purpose. * * @return a collection of process instances currently active in the session */ Collection<ProcessInstance> getProcessInstances(); /** * Returns the process instance with the given ID. Note that only active process instances * are returned. If a process instance has been completed already, this method returns * null. * * @param id the ID of the process instance * @return the process instance with the given ID, or null if it cannot be found */ ProcessInstance getProcessInstance(long processInstanceId); /** * Aborts the process instance with the given ID. If the process instance has been completed * (or aborted), or if the process instance cannot be found, this method will throw an * IllegalArgumentException. * * @param id the ID of the process instance */ void abortProcessInstance(long processInstanceId); /** * Returns the WorkItemManager related to this session. 
This object can be used to * register new WorkItemHandlers or to complete (or abort) WorkItems. * * @return the WorkItemManager related to this session */ WorkItemManager getWorkItemManager(); 66.1.4. Correlation Keys When working with processes, you might need to assign a business identifier to a process instance and then use the identifier to reference the instance without storing the generated instance ID. To provide such capabilities, the process engine uses the CorrelationKey interface, which can define CorrelationProperties . A class that implements CorrelationKey can have either a single property describing it or a multi-property set. The value of the property or a combination of values of several properties refers to a unique instance. The KieSession class implements the CorrelationAwareProcessRuntime interface to support correlation capabilities. This interface exposes the following methods: Methods of the CorrelationAwareProcessRuntime interface /** * Start a new process instance. Use the process (definition) that * is referenced by the given process ID. You can pass parameters * to the process instance (as name-value pairs), and these parameters set * variables of the process instance. * * @param processId the ID of the process to start * @param correlationKey custom correlation key that can be used to identify the process instance * @param parameters the process variables to set when starting the process instance * @return the ProcessInstance that represents the instance of the process that was started */ ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters); /** * Create a new process instance (but do not yet start it). Use the process * (definition) that is referenced by the given process ID. * You can pass to the process instance (as name-value pairs), * and these parameters set variables of the process instance. * Use this method if you need a reference to the process instance before actually * starting it. Otherwise, use startProcess. * * @param processId the ID of the process to start * @param correlationKey custom correlation key that can be used to identify the process instance * @param parameters the process variables to set when creating the process instance * @return the ProcessInstance that represents the instance of the process that was created (but not yet started) */ ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters); /** * Returns the process instance with the given correlationKey. Note that only active process instances * are returned. If a process instance has been completed already, this method will return * null. * * @param correlationKey the custom correlation key assigned when the process instance was created * @return the process instance identified by the key or null if it cannot be found */ ProcessInstance getProcessInstance(CorrelationKey correlationKey); Correlation is usually used with long-running processes. You must enable persistence if you want to store correlation information permanently. 66.2. Runtime manager The RuntimeManager class provides a layer in the process engine API that simplifies and empowers its usage. This class encapsulates and manages the KIE base and KIE session, as well as the task service that provides handlers for all tasks in the process. The KIE session and the task service within the runtime manager are already configured to work with each other and you do not need to provide such configuration. 
For example, you do not need to register a human task handler and to ensure that it is connected to the required service. The runtime manager manages the KIE session according to a predefined strategy. The following strategies are available: Singleton : The runtime manager maintains a single KieSession and uses it for all the requested processes. Per Request : The runtime manager creates a new KieSession for every request. Per Process Instance : The runtime manager maintains mapping between process instance and KieSession and always provides the same KieSession whenever working with a given process instance. Regardless of the strategy, the RuntimeManager class ensures the same capabilities in initialization and configuration of the process engine components: KieSession instances are loaded with the same factories (either in memory or JPA based). Work item handlers are registered on every KieSession instance (either loaded from the database or newly created). Event listeners ( Process , Agenda , WorkingMemory ) are registered on every KIE session, whether the session is loaded from the database or newly created. The task service is configured with the following required components: The JTA transaction manager The same entity manager factory as the one used for KieSession instances The UserGroupCallback instance that can be configured in the environment The runtime manager also enables disposing the process engine cleanly. It provides dedicated methods to dispose a RuntimeEngine instance when it is no longer needed, releasing any resources it might have acquired. The following code shows the definition of the RuntimeManager interface: Definition of the RuntimeManager interface public interface RuntimeManager { /** * Returns a <code>RuntimeEngine</code> instance that is fully initialized: * <ul> * <li>KieSession is created or loaded depending on the strategy</li> * <li>TaskService is initialized and attached to the KIE session (through a listener)</li> * <li>WorkItemHandlers are initialized and registered on the KIE session</li> * <li>EventListeners (process, agenda, working memory) are initialized and added to the KIE session</li> * </ul> * @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code> * @return instance of the <code>RuntimeEngine</code> */ RuntimeEngine getRuntimeEngine(Context<?> context); /** * Unique identifier of the <code>RuntimeManager</code> * @return */ String getIdentifier(); /** * Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact. * This method should always be used to dispose <code>RuntimeEngine</code> that is not needed * anymore. <br/> * Do not use KieSession.dispose() used with RuntimeManager as it will break the internal * mechanisms of the manager responsible for clear and efficient disposal.<br/> * Disposing is not needed if <code>RuntimeEngine</code> was obtained within an active JTA transaction, * if the getRuntimeEngine method was invoked during active JTA transaction, then disposing of * the runtime engine will happen automatically on transaction completion. * @param runtime */ void disposeRuntimeEngine(RuntimeEngine runtime); /** * Closes <code>RuntimeManager</code> and releases its resources. Call this method when * a runtime manager is not needed anymore. Otherwise it will still be active and operational. 
*/ void close(); } The RuntimeManager class also provides the RuntimeEngine class, which includes methods to get access to underlying process engine components: Definition of the RuntimeEngine interface public interface RuntimeEngine { /** * Returns the <code>KieSession</code> configured for this <code>RuntimeEngine</code> * @return */ KieSession getKieSession(); /** * Returns the <code>TaskService</code> configured for this <code>RuntimeEngine</code> * @return */ TaskService getTaskService(); } Note An identifier of the RuntimeManager class is used as deploymentId during runtime execution. For example, the identifier is persisted as deploymentId of a Task when the Task is persisted. The deploymentID of a Task associates it with the RuntimeManager when the Task is completed and the process instance is resumed. The same deploymentId is also persisted as externalId in history log tables. If you don't specify an identifier when creating a RuntimeManager instance, a default value is applied, depending on the strategy (for example, default-per-pinstance for PerProcessInstanceRuntimeManager ). That means your application uses the same deployment of the RuntimeManager class in its entire lifecycle. If you maintain multiple runtime managers in your application, you must specify a unique identifier for every RuntimeManager instance. For example, the deployment service maintains multiple runtime managers and uses the GAV value of the KJAR file as an identifier. The same logic is used in Business Central and in KIE Server, because they depend on the deployment service. Note When you need to interact with the process engine or task service from within a handler or a listener, you can use the RuntimeManager interface to retrieve the RuntimeEngine instance for the given process instance, and then use the RuntimeEngine instance to retrieve the KieSession or TaskService instance. This approach ensures that the proper state of the engine, managed according to the selected strategy, is preserved. 66.2.1. Runtime manager strategies The RuntimeManager class supports the following strategies for managing KIE sessions. Singleton strategy This strategy instructs the runtime manager to maintain a single RuntimeEngine instance (and in turn single KieSession and TaskService instances). Access to the runtime engine is synchronized and, therefore, thread safe, although it comes with a performance penalty due to synchronization. Use this strategy for simple use cases. This strategy has the following characteristics: It has a small memory footprint, with single instances of the runtime engine and the task service. It is simple and compact in design and usage. It is a good fit for low-to-medium load on the process engine because of synchronized access. In this strategy, because of the single KieSession instance, all state objects (such as facts) are directly visible to all process instances and vice versa. The strategy is not contextual. When you retrieve instances of RuntimeEngine from a singleton RuntimeManager , you do not need to take the Context instance into account. Usually, you can use EmptyContext.get() as the context, although a null argument is acceptable as well. In this strategy, the runtime manager keeps track of the ID of the KieSession , so that the same session remains in use after a RuntimeManager restart. 
The ID is stored as a serialized file in a temporary location in the file system that, depending on the environment, can be one of the following directories: The value of the jbpm.data.dir system property The value of the jboss.server.data.dir system property The value of the java.io.tmpdir system property Warning A combination of the Singleton strategy and the EJB Timer Scheduler might raise Hibernate issues under load. Do not use this combination in production applications. The EJB Timer Scheduler is the default scheduler in KIE Server. Per request strategy This strategy instructs the runtime manager to provide a new instance of RuntimeEngine for every request. One or more invocations of the process engine within a single transaction are considered a single request. The same instance of RuntimeEngine must be used within a single transaction to ensure correctness of state. Otherwise, an operation completed in one call would not be visible in the call. This strategy is stateless, as process state is preserved only within the request. When a request is completed, the RuntimeEngine instance is permanently destroyed. If persistence is used, information related to the KIE session is removed from the persistence database as well. This strategy has the following characteristics: It provides completely isolated process engine and task service operations for every request. It is completely stateless, because facts are stored only for the duration of the request. It is a good fit for high-load, stateless processes, where no facts or timers must be preserved between requests. In this strategy, the KIE session is only available during the life of a request and is destroyed at the end of the request. The strategy is not contextual. When you retrieve instances of RuntimeEngine from a per-request RuntimeManager , you do not need to take the Context instance into account. Usually, you can use EmptyContext.get() as the context, although a null argument is acceptable as well. Per process instance strategy This strategy instructs RuntimeManager to maintain a strict relationship between a KIE session and a process instance. Each KieSession is available as long as the ProcessInstance to which it belongs is active. This strategy provides the most flexible approach for using advanced capabilities of the process engine, such as rule evaluation and isolation between process instances. It maximizes performance and reduces potential bottlenecks introduced by synchronization. At the same time, unlike the request strategy, it reduces the number of KIE sessions to the actual number of process instances, rather than the total number of requests. This strategy has the following characteristics: It provides isolation for every process instance. It maintains a strict relationship between KieSession and ProcessInstance to ensure that it always delivers the same KieSession for a given ProcessInstance . It merges the lifecycle of KieSession with ProcessInstance , and both are disposed when the process instance completes or aborts. It enables maintenance of data, such as facts and timers, in the scope of the process instance. Only the process instance has access to the data. It introduces some overhead because of the need to look up and load the KieSession for the process instance. It validates every usage of a KieSession so it cannot be used for other process instances. An exception is thrown if another process instance uses the same KieSession . 
The strategy is contextual and accepts the following context instances: EmptyContext or null: Used when starting a process instance because no process instance ID is available yet ProcessInstanceIdContext : Used after the process instance is created CorrelationKeyContext : Used as an alternative to ProcessInstanceIdContext to use a custom (business) key instead of the process instance ID 66.2.2. Typical usage scenario for the runtime manager The typical usage scenario for the runtime manager consists of the following stages: At application startup time, complete the following stage: Build a RuntimeManager instance and keep it for the entire lifetime of the application, as it is thread-safe and can be accessed concurrently. At request time, complete the following stages: Get RuntimeEngine from the RuntimeManager , using the proper context instance as determined by the strategy that you configured for the RuntimeManager class. Get the KieSession and TaskService objects from the RuntimeEngine . Use the KieSession and TaskService objects for operations such as startProcess or completeTask . After completing processing, dispose RuntimeEngine using the RuntimeManager.disposeRuntimeEngine method. At application shutdown time, complete the following stage: Close the RuntimeManager instance. Note When RuntimeEngine is obtained from RuntimeManager within an active JTA transaction, you do not need to dispose RuntimeEngine at the end, as RuntimeManager automatically disposes the RuntimeEngine on transaction completion (regardless of the completion status: commit or rollback). The following example shows how you can build a RuntimeManager instance and get a RuntimeEngine instance (that encapsulates KieSession and TaskService classes) from it: Building a RuntimeManager instance and then getting RuntimeEngine and KieSession // First, configure the environment to be used by RuntimeManager RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get() .newDefaultInMemoryBuilder() .addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2) .get(); // , create the RuntimeManager - in this case the singleton strategy is chosen RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment); // Then get RuntimeEngine from the runtime manager, using an empty context because singleton does not keep track // of runtime engine as there is only one RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get()); // Get the KieSession from the RuntimeEngine - already initialized with all handlers, listeners, and other requirements // configured on the environment KieSession ksession = runtimeEngine.getKieSession(); // Add invocations of the process engine here, // for example, ksession.startProcess(processId); // Finally, dispose the runtime engine manager.disposeRuntimeEngine(runtimeEngine); This example provides the simplest, or minimal, way of using RuntimeManager and RuntimeEngine classes. It has the following characteristics: The KieSession instance is created in memory, using the newDefaultInMemoryBuilder builder. A single process, which is added as an asset, is available for execution. The TaskService class is configured and attached to the KieSession instance through the LocalHTWorkItemHandler interface to support user task capabilities within processes. 66.2.3. 
Runtime environment configuration object The RuntimeManager class encapsulates internal process engine complexity, such as creating, disposing, and registering handlers. It also provides fine-grained control over process engine configuration. To set this configuration, you must create a RuntimeEnvironment object and then use it to create the RuntimeManager object. The following definition shows the methods available in the RuntimeEnvironment interface: Methods in the RuntimeEnvironment interface public interface RuntimeEnvironment { /** * Returns <code>KieBase</code> that is to be used by the manager * @return */ KieBase getKieBase(); /** * KieSession environment that is to be used to create instances of <code>KieSession</code> * @return */ Environment getEnvironment(); /** * KieSession configuration that is to be used to create instances of <code>KieSession</code> * @return */ KieSessionConfiguration getConfiguration(); /** * Indicates if persistence is to be used for the KieSession instances * @return */ boolean usePersistence(); /** * Delivers a concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners * that is to be registered on instances of <code>KieSession</code> * @return */ RegisterableItemsFactory getRegisterableItemsFactory(); /** * Delivers a concrete implementation of <code>UserGroupCallback</code> that is to be registered on instances * of <code>TaskService</code> for managing users and groups. * @return */ UserGroupCallback getUserGroupCallback(); /** * Delivers a custom class loader that is to be used by the process engine and task service instances * @return */ ClassLoader getClassLoader(); /** * Closes the environment, permitting closing of all dependent components such as ksession factories */ void close(); 66.2.4. Runtime environment builder To create an instance of RuntimeEnvironment that contains the required data, use the RuntimeEnvironmentBuilder class. This class provides a fluent API to configure a RuntimeEnvironment instance with predefined settings. The following definition shows the methods in the RuntimeEnvironmentBuilder interface: Methods in the RuntimeEnvironmentBuilder interface public interface RuntimeEnvironmentBuilder { public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled); public RuntimeEnvironmentBuilder entityManagerFactory(Object emf); public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type); public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value); public RuntimeEnvironmentBuilder addConfiguration(String name, String value); public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase); public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback); public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory); public RuntimeEnvironment get(); public RuntimeEnvironmentBuilder classLoader(ClassLoader cl); public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler); Use the RuntimeEnvironmentBuilderFactory class to obtain instances of RuntimeEnvironmentBuilder . Along with empty instances with no settings, you can get builders with several preconfigured sets of configuration options for the runtime manager. 
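For example, a builder obtained from the factory can be configured fluently before creating the runtime manager, as in the following minimal sketch. The EntityManagerFactory ( emf ) and the UserGroupCallback implementation ( userGroupCallback ) are assumed to be provided by your application, and the GAV-style identifier is illustrative.
// Minimal sketch: emf and userGroupCallback are assumed to exist in your application
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultBuilder()                 // preconfigured defaults, including JPA persistence
    .entityManagerFactory(emf)
    .userGroupCallback(userGroupCallback)
    .addAsset(ResourceFactory.newClassPathResource("MyProcess.bpmn2"), ResourceType.BPMN2)
    .get();

// The environment can then be passed to any RuntimeManagerFactory method, for example:
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
    .newPerProcessInstanceRuntimeManager(environment, "com.example:sample-kjar:1.0.0");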
The following definition shows the methods in the RuntimeEnvironmentBuilderFactory interface: Methods in the RuntimeEnvironmentBuilderFactory interface public interface RuntimeEnvironmentBuilderFactory { /** * Provides a completely empty <code>RuntimeEnvironmentBuilder</code> instance to manually * set all required components instead of relying on any defaults. * @return new instance of <code>RuntimeEnvironmentBuilder</code> */ public RuntimeEnvironmentBuilder newEmptyBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * but does not have persistence for the process engine configured so it will only store process instances in memory * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files * @param groupId group id of kjar * @param artifactId artifact id of kjar * @param version version number of kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files and use the kbase and ksession settings in the KJAR * @param groupId group id of kjar * @param artifactId artifact id of kjar * @param version version number of kjar * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar * @param ksessionName name of the ksession define in kmodule.xml stored in kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version, String kbaseName, String ksessionName); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files and use the release ID defined in the KJAR * @param releaseId <code>ReleaseId</code> that described the kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files and use the kbase, ksession, and release ID settings in the KJAR * @param releaseId <code>ReleaseId</code> that 
described the kjar * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar * @param ksessionName name of the ksession define in kmodule.xml stored in kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId, String kbaseName, String ksessionName); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * It relies on KieClasspathContainer that requires the presence of kmodule.xml in the META-INF folder which * defines the kjar itself. * Expects to use default kbase and ksession from kmodule. * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * It relies on KieClasspathContainer that requires the presence of kmodule.xml in the META-INF folder which * defines the kjar itself. * @param kbaseName name of the kbase defined in kmodule.xml * @param ksessionName name of the ksession define in kmodule.xml * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(String kbaseName, String ksessionName); The runtime manager also provides access to a TaskService object as an integrated component of a RuntimeEngine object, configured to communicate with the KIE session. If you use one of the default builders, the following configuration settings for the task service are present: The persistence unit name is set to org.jbpm.persistence.jpa (for both process engine and task service). The human task handler is registered on the KIE session. The JPA-based history log event listener is registered on the KIE session. An event listener to trigger rule task evaluation ( fireAllRules ) is registered on the KIE session. 66.2.5. Registration of handlers and listeners for runtime engines If you use the runtime manager API, the runtime engine object represents the process engine. To extend runtime engines with your own handlers or listeners, you can implement the RegisterableItemsFactory interface and then include it in the runtime environment using the RuntimeEnvironmentBuilder.registerableItemsFactory() method. Then the runtime manager automatically adds the handlers or listeners to every runtime engine it creates. The following definition shows the methods in the RegisterableItemsFactory interface: Methods in the RegisterableItemsFactory interface /** * Returns new instances of <code>WorkItemHandler</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case handler need to make use of it internally * @return map of handlers to be registered - in case of no handlers empty map shall be returned. 
*/ Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime); /** * Returns new instances of <code>ProcessEventListener</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally * @return list of listeners to be registered - in case of no listeners empty list shall be returned. */ List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime); /** * Returns new instances of <code>AgendaEventListener</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally * @return list of listeners to be registered - in case of no listeners empty list shall be returned. */ List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime); /** * Returns new instances of <code>WorkingMemoryEventListener</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally * @return list of listeners to be registered - in case of no listeners empty list shall be returned. */ List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime); The process engine provides default implementations of RegisterableItemsFactory . You can extend these implementations to define custom handlers and listeners. The following available implementations might be useful: org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory : The simplest possible implementation. It does not have any predefined content and uses reflection to produce instances of handlers and listeners based on given class names. org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory : An extension of the Simple implementation that introduces the same defaults as the default runtime environment builder and still provides the same capabilities as the Simple implementation. org.jbpm.runtime.manager.impl.cdi.InjectableRegisterableItemsFactory : An extension of the Default implementation that is tailored for CDI environments and provides a CDI style approach to finding handlers and listeners using producers. 66.2.5.1. Registering work item handlers using a file You can register simple work item handlers, which are stateless or rely on the KieSession state, by defining them in the CustomWorkItem.conf file and placing the file on the class path. Procedure Create a file named drools.session.conf in the META-INF subdirectory of the root of the class path. For web applications the directory is WEB-INF/classes/META-INF . Add the following line to the drools.session.conf file: Create a file named CustomWorkItemHandlers.conf in the same directory. In the CustomWorkItemHandlers.conf file, define custom work item handlers using the MVEL style, similar to the following example: Result The work item handlers that you listed are registered for any KIE session created by the application, regardless of whether the application uses the runtime manager API. 66.2.5.2. Registration of handlers and listeners in a CDI environment If your application uses the runtime manager API and runs in a CDI environment, your classes can implement the dedicated producer interfaces to provide custom work item handlers and event listeners to all runtime engines. To create a work item handler, you must implement the WorkItemHandlerProducer interface. 
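A minimal sketch of such a producer is shown below; the handler class RestServiceWorkItemHandler and the registration name RestService are hypothetical examples, and the full interface definition follows.
// Minimal sketch of a CDI work item handler producer (handler class is hypothetical)
@ApplicationScoped
public class CustomWorkItemHandlerProducer implements WorkItemHandlerProducer {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params) {
        // Return new handler instances on every invocation, as the interface documentation recommends
        Map<String, WorkItemHandler> handlers = new HashMap<>();
        handlers.put("RestService", new RestServiceWorkItemHandler());
        return handlers;
    }
}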
Definition of the WorkItemHandlerProducer interface public interface WorkItemHandlerProducer { /** * Returns a map of work items (key = work item name, value= work item handler instance) * to be registered on the KieSession * <br/> * The following parameters are accepted: * <ul> * <li>ksession</li> * <li>taskService</li> * <li>runtimeManager</li> * </ul> * * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out * and provide valid instances for given owner * @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances * @return map of work item handler instances (recommendation is to always return new instances when this method is invoked) */ Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params); } To create an event listener, you must implement the EventListenerProducer interface. Annotate the event listener producer with the proper qualifier to indicate the type of listeners that it provides. Use one of the following annotations: @Process for ProcessEventListener @Agenda for AgendaEventListener @WorkingMemory for WorkingMemoryEventListener Definition of the EventListenerProducer interface public interface EventListenerProducer<T> { /** * Returns a list of instances for given (T) type of listeners * <br/> * The following parameters are accepted: * <ul> * <li>ksession</li> * <li>taskService</li> * <li>runtimeManager</li> * </ul> * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out * and provide valid instances for given owner * @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances * @return list of listener instances (recommendation is to always return new instances when this method is invoked) */ List<T> getEventListeners(String identifier, Map<String, Object> params); } Package your implementations of these interfaces as a bean archive by including beans.xml in the META-INF subdirectory. Place the bean archive on the application class path, for example, in WEB-INF/lib for a web application. The CDI-based runtime manager discovers the packages and registers the work item handlers and event listeners in every KieSession that it creates or loads from the data store. The process engine provides certain parameters to the producers to enable stateful and advanced operation. For example, the handlers or listeners can use the parameters to signal the process engine or the process instance in case of an error. The process engine provides the following components as parameters: KieSession TaskService RuntimeManager In addition, the identifier of the RuntimeManager class instance is provided as a parameter. You can apply filtering to the identifier to decide whether this RuntimeManager instance receives the handlers and listeners. 66.3. Services in the process engine The process engine provides a set of high-level services, running on top of the runtime manager API. The services provide the most convenient way to embed the process engine in your application. KIE Server also uses these services internally. When you use services, you do not need to implement your own handling of the runtime manager, runtime engines, sessions, and other process engine entities. However, you can access the underlying RuntimeManager objects through the services when necessary. 
Note If you use the EJB remote client for the services API, the RuntimeManager objects are not available, because they would not operate correctly on the client side after serialization. 66.3.1. Modules for process engine services The process engine services are provided as a set of modules. These modules are grouped by their framework dependencies. You can choose the suitable modules and use only these modules, without making your application dependent on the frameworks that other modules use. The following modules are available: jbpm-services-api : Only API classes and interfaces jbpm-kie-services : A code implementation of the services API in pure Java without any framework dependencies jbpm-services-cdi : A CDI wrapper on top of the core services implementation jbpm-services-ejb-api : An extension of the services API to support EJB requirements jbpm-services-ejb-impl : EJB wrappers on top of the core services implementation jbpm-services-ejb-timer : A scheduler service based on the EJB timer service to support time-based operations, such as timer events and deadlines jbpm-services-ejb-client : An EJB remote client implementation, currently supporting only Red Hat JBoss EAP 66.3.2. Deployment service The deployment service deploys and undeploys units in the process engine. A deployment unit represents the contents of a KJAR file. A deployment unit includes business assets, such as process definitions, rules, forms, and data models. After deploying the unit you can execute the processes it defines. You can also query the available deployment units. Every deployment unit has a unique identifier string, deploymentId , also known as deploymentUnitId . You can use this identifier to apply any service actions to the deployment unit. In a typical use case for this service, you can load and unload multiple KJARs at the same time and, when necessary, execute processes simultaneously. The following code sample shows simple use of the deployment service. Using the deployment service // Create deployment unit by providing the GAV of the KJAR DeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION); // Get the deploymentId for the deployed unit String deploymentId = deploymentUnit.getIdentifier(); // Deploy the unit deploymentService.deploy(deploymentUnit); // Retrieve the deployed unit DeployedUnit deployed = deploymentService.getDeployedUnit(deploymentId); // Get the runtime manager RuntimeManager manager = deployed.getRuntimeManager(); The following definition shows the complete DeploymentService interface: Definition of the DeploymentService interface public interface DeploymentService { void deploy(DeploymentUnit unit); void undeploy(DeploymentUnit unit); RuntimeManager getRuntimeManager(String deploymentUnitId); DeployedUnit getDeployedUnit(String deploymentUnitId); Collection<DeployedUnit> getDeployedUnits(); void activate(String deploymentId); void deactivate(String deploymentId); boolean isDeployed(String deploymentUnitId); } 66.3.3. Definition service When you deploy a process definition using the deployment service, the definition service automatically scans the definition, parses the process, and extracts the information that the process engine requires. You can use the definition service API to retrieve information about the process definition. The service extracts this information directly from the BPMN2 process definition. 
The following information is available: Process definition such as ID, name, and description Process variables including the name and type of every variable Reusable sub-processes used in the process (if any) Service tasks that represent domain-specific activities User tasks including assignment information Task data with input and output information The following code sample shows simple use of the definition service. The processID must correspond to the ID of a process definition in a KJAR file that you already deployed using the deployment service. Using the definition service String processId = "org.jbpm.writedocument"; Collection<UserTaskDefinition> processTasks = bpmn2Service.getTasksDefinitions(deploymentUnit.getIdentifier(), processId); Map<String, String> processData = bpmn2Service.getProcessVariables(deploymentUnit.getIdentifier(), processId); Map<String, String> taskInputMappings = bpmn2Service.getTaskInputMappings(deploymentUnit.getIdentifier(), processId, "Write a Document" ); You can also use the definition service to scan a definition that you provide as BPMN2-compliant XML content, without the use of a KJAR file. The buildProcessDefinition method provides this capability. The following definition shows the complete DefinitionService interface: Definition of the DefinitionService interface public interface DefinitionService { ProcessDefinition buildProcessDefinition(String deploymentId, String bpmn2Content, ClassLoader classLoader, boolean cache) throws IllegalArgumentException; ProcessDefinition getProcessDefinition(String deploymentId, String processId); Collection<String> getReusableSubProcesses(String deploymentId, String processId); Map<String, String> getProcessVariables(String deploymentId, String processId); Map<String, String> getServiceTasks(String deploymentId, String processId); Map<String, Collection<String>> getAssociatedEntities(String deploymentId, String processId); Collection<UserTaskDefinition> getTasksDefinitions(String deploymentId, String processId); Map<String, String> getTaskInputMappings(String deploymentId, String processId, String taskName); Map<String, String> getTaskOutputMappings(String deploymentId, String processId, String taskName); } 66.3.4. Process service The deployment and definition services prepare process data in the process engine. To execute processes based on this data, use the process service. The process service supports interaction with the process engine execution environment, including the following actions: Starting a new process instance Running a process as a single transaction Working with an existing process instance, for example, signalling events, getting information details, and setting values of variables Working with work items The process service is also a command executor. You can use it to execute commands on the KIE session to extend its capabilities. Important The process service is optimized for runtime operations. Use it when you need to run a process or to alter a process instance, for example, signal events or change variables. For read operations, for example, showing available process instances, use the runtime data service. 
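For example, listing active process instances is a read operation that belongs to the runtime data service, while signalling one of those instances is a runtime operation for the process service. The following lines are a minimal sketch, assuming that both services are already available (for example, injected); the signal name "MySignal" is a hypothetical signal defined in your process:

// Read side: list active instances with the runtime data service
Collection<ProcessInstanceDesc> active = runtimeDataService.getProcessInstances(new QueryContext());

// Runtime side: signal one of the instances with the process service
// (sketch assumes at least one active instance exists)
Long instanceId = active.iterator().next().getId();
processService.signalProcessInstance(instanceId, "MySignal", null);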
The following code sample shows deploying and running a process: Deploying and runing a process using the deployment and process services KModuleDeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION); deploymentService.deploy(deploymentUnit); long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "customtask"); ProcessInstance pi = processService.getProcessInstance(processInstanceId); The startProcess method expects deploymentId as the first argument. Using this argument, you can start processes in a certain deployment when your application might have multiple deployments. For example, you might deploy different versions of the same process from different KJAR files. You can then start the required version using the correct deploymentId . The following definition shows the complete ProcessService interface: Definition of the ProcessService interface public interface ProcessService { /** * Starts a process with no variables * * @param deploymentId deployment identifier * @param processId process identifier * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId); /** * Starts a process and sets variables * * @param deploymentId deployment identifier * @param processId process identifier * @param params process variables * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId, Map<String, Object> params); /** * Starts a process with no variables and assigns a correlation key * * @param deploymentId deployment identifier * @param processId process identifier * @param correlationKey correlation key to be assigned to the process instance - must be unique * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId, CorrelationKey correlationKey); /** * Starts a process, sets variables, and assigns a correlation key * * @param deploymentId deployment identifier * @param processId process identifier * @param correlationKey correlation key to be assigned to the process instance - must be unique * @param params process variables * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId, CorrelationKey correlationKey, Map<String, Object> params); /** * Run a process that is designed to start and finish in a single transaction. 
* This method starts the process and returns when the process completes. * It returns the state of process variables at the outcome of the process * * @param deploymentId deployment identifier for the KJAR file of the process * @param processId process identifier * @param params process variables * @return the state of process variables at the end of the process */ Map<String, Object> computeProcessOutcome(String deploymentId, String processId, Map<String, Object> params); /** * Starts a process at the listed nodes, instead of the normal starting point. * This method can be used for restarting a process that was aborted. However, * it does not restore the context of a process instance. You must * supply all necessary variables when calling this method. * This method does not guarantee that the process is started in a valid state. * * @param deploymentId deployment identifier * @param processId process identifier * @param params process variables * @param nodeIds list of BPMN node identifiers where the process must start * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcessFromNodeIds(String deploymentId, String processId, Map<String, Object> params, String... nodeIds); /** * Starts a process at the listed nodes, instead of the normal starting point, * and assigns a correlation key. * This method can be used for restarting a process that was aborted. However, * it does not restore the context of a process instance. You must * supply all necessary variables when calling this method. * This method does not guarantee that the process is started in a valid state. * * @param deploymentId deployment identifier * @param processId process identifier * @param key correlation key (must be unique) * @param params process variables * @param nodeIds list of BPMN node identifiers where the process must start. * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcessFromNodeIds(String deploymentId, String processId, CorrelationKey key, Map<String, Object> params, String... 
nodeIds); /** * Aborts the specified process * * @param processInstanceId process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstance(Long processInstanceId); /** * Aborts the specified process * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstance(String deploymentId, Long processInstanceId); /** * Aborts all specified processes * * @param processInstanceIds list of process instance unique identifiers * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstances(List<Long> processInstanceIds); /** * Aborts all specified processes * * @param deploymentId deployment to which the process instance belongs * @param processInstanceIds list of process instance unique identifiers * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstances(String deploymentId, List<Long> processInstanceIds); /** * Signals an event to a single process instance * * @param processInstanceId the process instance unique identifier * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstance(Long processInstanceId, String signalName, Object event); /** * Signals an event to a single process instance * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId the process instance unique identifier * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstance(String deploymentId, Long processInstanceId, String signalName, Object event); /** * Signal an event to a list of process instances * * @param processInstanceIds list of process instance unique identifiers * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstances(List<Long> processInstanceIds, String signalName, Object event); /** * Signal an event to a list of process instances * * @param deploymentId deployment to which the process instances belong * @param processInstanceIds list of process instance unique identifiers * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in 
case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstances(String deploymentId, List<Long> processInstanceIds, String signalName, Object event); /** * Signal an event to a single process instance by correlation key * * @param correlationKey the unique correlation key of the process instance * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given key was not found */ void signalProcessInstanceByCorrelationKey(CorrelationKey correlationKey, String signalName, Object event); /** * Signal an event to a single process instance by correlation key * * @param deploymentId deployment to which the process instance belongs * @param correlationKey the unique correlation key of the process instance * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given key was not found */ void signalProcessInstanceByCorrelationKey(String deploymentId, CorrelationKey correlationKey, String signalName, Object event); /** * Signal an event to given list of correlation keys * * @param correlationKeys list of unique correlation keys of process instances * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with one of the given keys was not found */ void signalProcessInstancesByCorrelationKeys(List<CorrelationKey> correlationKeys, String signalName, Object event); /** * Signal an event to given list of correlation keys * * @param deploymentId deployment to which the process instances belong * @param correlationKeys list of unique correlation keys of process instances * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with one of the given keys was not found */ void signalProcessInstancesByCorrelationKeys(String deploymentId, List<CorrelationKey> correlationKeys, String signalName, Object event); /** * Signal an event to a any process instance that listens to a given signal and belongs to a given deployment * * @param deployment identifier of the deployment * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found */ void signalEvent(String deployment, String signalName, Object event); /** * Returns process instance information. Will return null if no * active process with the ID is found * * @param processInstanceId The process instance unique identifier * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(Long processInstanceId); /** * Returns process instance information. 
Will return null if no * active process with the ID is found * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(String deploymentId, Long processInstanceId); /** * Returns process instance information. Will return null if no * active process with that correlation key is found * * @param correlationKey correlation key assigned to the process instance * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(CorrelationKey correlationKey); /** * Returns process instance information. Will return null if no * active process with that correlation key is found * * @param deploymentId deployment to which the process instance belongs * @param correlationKey correlation key assigned to the process instance * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(String deploymentId, CorrelationKey correlationKey); /** * Sets a process variable. * @param processInstanceId The process instance unique identifier * @param variableId The variable ID to set * @param value The variable value * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariable(Long processInstanceId, String variableId, Object value); /** * Sets a process variable. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @param variableId The variable id to set. * @param value The variable value. * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariable(String deploymentId, Long processInstanceId, String variableId, Object value); /** * Sets process variables. * * @param processInstanceId The process instance unique identifier * @param variables map of process variables (key = variable name, value = variable value) * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariables(Long processInstanceId, Map<String, Object> variables); /** * Sets process variables. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @param variables map of process variables (key = variable name, value = variable value) * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariables(String deploymentId, Long processInstanceId, Map<String, Object> variables); /** * Gets a process instance variable. 
* * @param processInstanceId the process instance unique identifier * @param variableName the variable name to get from the process * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Object getProcessInstanceVariable(Long processInstanceId, String variableName); /** * Gets a process instance variable. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId the process instance unique identifier * @param variableName the variable name to get from the process * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Object getProcessInstanceVariable(String deploymentId, Long processInstanceId, String variableName); /** * Gets a process instance variable values. * * @param processInstanceId The process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Map<String, Object> getProcessInstanceVariables(Long processInstanceId); /** * Gets a process instance variable values. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Map<String, Object> getProcessInstanceVariables(String deploymentId, Long processInstanceId); /** * Returns all signals available in current state of given process instance * * @param processInstanceId process instance ID * @return list of available signals or empty list if no signals are available */ Collection<String> getAvailableSignals(Long processInstanceId); /** * Returns all signals available in current state of given process instance * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID * @return list of available signals or empty list if no signals are available */ Collection<String> getAvailableSignals(String deploymentId, Long processInstanceId); /** * Completes the specified WorkItem with the given results * * @param id workItem ID * @param results results of the workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void completeWorkItem(Long id, Map<String, Object> results); /** * Completes the specified WorkItem with the given results * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID to which the work item belongs * @param id workItem ID * @param results results of the workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void completeWorkItem(String deploymentId, Long processInstanceId, Long id, Map<String, Object> results); /** * Abort the specified workItem * * @param id workItem ID * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void 
abortWorkItem(Long id); /** * Abort the specified workItem * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID to which the work item belongs * @param id workItem ID * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void abortWorkItem(String deploymentId, Long processInstanceId, Long id); /** * Returns the specified workItem * * @param id workItem ID * @return The specified workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ WorkItem getWorkItem(Long id); /** * Returns the specified workItem * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID to which the work item belongs * @param id workItem ID * @return The specified workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ WorkItem getWorkItem(String deploymentId, Long processInstanceId, Long id); /** * Returns active work items by process instance ID. * * @param processInstanceId process instance ID * @return The list of active workItems for the process instance * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ List<WorkItem> getWorkItemByProcessInstance(Long processInstanceId); /** * Returns active work items by process instance ID. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID * @return The list of active workItems for the process instance * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ List<WorkItem> getWorkItemByProcessInstance(String deploymentId, Long processInstanceId); /** * Executes the provided command on the underlying command executor (usually KieSession) * @param deploymentId deployment identifier * @param command actual command for execution * @return results of the command execution * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active for restricted commands (for example, start process) */ public <T> T execute(String deploymentId, Command<T> command); /** * Executes the provided command on the underlying command executor (usually KieSession) * @param deploymentId deployment identifier * @param context context implementation to be used to get the runtime engine * @param command actual command for execution * @return results of the command execution * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active for restricted commands (for example, start process) */ public <T> T execute(String deploymentId, Context<?> context, Command<T> command); } 66.3.5. 
Runtime Data Service
You can use the runtime data service to retrieve all runtime information about processes, such as started process instances and executed node instances. For example, you can build a list-based UI to show process definitions, process instances, tasks for a given user, and other data, based on information provided by the runtime data service. This service is optimized to be as efficient as possible while providing all required information.

The following examples show various uses of this service.

Retrieving all process definitions

Collection definitions = runtimeDataService.getProcesses(new QueryContext());

Retrieving active process instances

Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances(new QueryContext());

Retrieving active nodes for a particular process instance

Collection<NodeInstanceDesc> instances = runtimeDataService.getProcessInstanceHistoryActive(processInstanceId, new QueryContext());

Retrieving tasks assigned to the user john

List<TaskSummary> taskSummaries = runtimeDataService.getTasksAssignedAsPotentialOwner("john", new QueryFilter(0, 10));

The runtime data service methods support two important parameters, QueryContext and QueryFilter . QueryFilter is an extension of QueryContext . You can use these parameters to manage the result set, providing pagination, sorting, and ordering. You can also use them to apply additional filtering when searching for user tasks.

The following definition shows the methods of the RuntimeDataService interface:

Definition of the RuntimeDataService interface

public interface RuntimeDataService { /** * Represents type of node instance log entries * */ enum EntryType { START(0), END(1), ABORTED(2), SKIPPED(3), OBSOLETE(4), ERROR(5); } // Process instance information /** * Returns a list of process instance descriptions * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the available process instances */ Collection<ProcessInstanceDesc> getProcessInstances(QueryContext queryContext); /** * Returns a list of all process instance descriptions with the given statuses and initiated by <code>initiator</code> * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param initiator the initiator of the {@link ProcessInstance} * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (states and initiator) */ Collection<ProcessInstanceDesc> getProcessInstances(List<Integer> states, String initiator, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process ID and statuses and initiated by <code>initiator</code> * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param processId ID of the {@link Process} (definition) used when starting the process instance * @param initiator initiator of the {@link ProcessInstance} * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (states, processId, and initiator) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessId(List<Integer> states, String processId, String initiator, QueryContext queryContext); /** * Returns a list of
process instance descriptions found for the given process name and statuses and initiated by <code>initiator</code> * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param processName name (not ID) of the {@link Process} (definition) used when starting the process instance * @param initiator initiator of the {@link ProcessInstance} * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (states, processName and initiator) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessName(List<Integer> states, String processName, String initiator, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given deployment ID and statuses * @param deploymentId deployment ID of the runtime * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (deploymentId and states) */ Collection<ProcessInstanceDesc> getProcessInstancesByDeploymentId(String deploymentId, List<Integer> states, QueryContext queryContext); /** * Returns process instance descriptions found for the given processInstanceId. If no descriptions are found, null is returned. At the same time, the method * fetches all active tasks (in status: Ready, Reserved, InProgress) to provide the information about what user task is keeping each instance * and who owns the task (if the task is already claimed by a user) * @param processInstanceId ID of the process instance to be fetched * @return process instance information, in the form of a {@link ProcessInstanceDesc} instance */ ProcessInstanceDesc getProcessInstanceById(long processInstanceId); /** * Returns the active process instance description found for the given correlation key. If none is found, returns null. At the same time it * fetches all active tasks (in status: Ready, Reserved, InProgress) to provide information about which user task is keeping each instance * and who owns the task (if the task is already claimed by a user) * @param correlationKey correlation key assigned to the process instance * @return process instance information, in the form of a {@link ProcessInstanceDesc} instance */ ProcessInstanceDesc getProcessInstanceByCorrelationKey(CorrelationKey correlationKey); /** * Returns process instances descriptions (regardless of their states) found for the given correlation key. If no descriptions are found, an empty list is returned * This query uses 'LIKE' to match correlation keys so it accepts partial keys. Matching * is performed based on a 'starts with' criterion * @param correlationKey correlation key assigned to the process instance * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given correlation key */ Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKey(CorrelationKey correlationKey, QueryContext queryContext); /** * Returns process instance descriptions, filtered by their states, that were found for the given correlation key. If none are found, returns an empty list * This query uses 'LIKE' to match correlation keys so it accepts partial keys. 
Matching * is performed based on a 'starts with' criterion * @param correlationKey correlation key assigned to process instance * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given correlation key */ Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKeyAndStatus(CorrelationKey correlationKey, List<Integer> states, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process definition ID * @param processDefId ID of the process definition * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (deploymentId and states) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process definition ID, filtered by state * @param processDefId ID of the process definition * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (deploymentId and states) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, List<Integer> states, QueryContext queryContext); /** * Returns process instance descriptions that match process instances that have the given variable defined, filtered by state * @param variableName name of the variable that process instance should have * @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that have the given variable defined */ Collection<ProcessInstanceDesc> getProcessInstancesByVariable(String variableName, List<Integer> states, QueryContext queryContext); /** * Returns process instance descriptions that match process instances that have the given variable defined and the value of the variable matches the given variableValue * @param variableName name of the variable that process instance should have * @param variableValue value of the variable to match * @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that have the given variable defined with the given value */ Collection<ProcessInstanceDesc> getProcessInstancesByVariableAndValue(String variableName, String variableValue, List<Integer> states, QueryContext queryContext); /** * Returns a list of process instance descriptions that have the specified parent * @param parentProcessInstanceId ID of the parent process instance * @param states list of possible state (int) values that the {@link ProcessInstance} can have. 
If null, returns only active instances * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the available process instances */ Collection<ProcessInstanceDesc> getProcessInstancesByParent(Long parentProcessInstanceId, List<Integer> states, QueryContext queryContext); /** * Returns a list of process instance descriptions that are subprocesses of the specified process, or subprocesses of those subprocesses, and so on. The list includes the full hierarchy of subprocesses under the specified parent process * @param processInstanceId ID of the parent process instance * @return list of {@link ProcessInstanceDesc} instances representing the full hierarchy of this process */ Collection<ProcessInstanceDesc> getProcessInstancesWithSubprocessByProcessInstanceId(Long processInstanceId, List<Integer> states, QueryContext queryContext); // Node and Variable instance information /** * Returns the active node instance descriptor for the given work item ID, if the work item exists and is active * @param workItemId identifier of the work item * @return NodeInstanceDesc for work item if it exists and is still active, otherwise null is returned */ NodeInstanceDesc getNodeInstanceForWorkItem(Long workItemId); /** * Returns a trace of all active nodes for the given process instance ID * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return */ Collection<NodeInstanceDesc> getProcessInstanceHistoryActive(long processInstanceId, QueryContext queryContext); /** * Returns a trace of all executed (completed) nodes for the given process instance ID * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return */ Collection<NodeInstanceDesc> getProcessInstanceHistoryCompleted(long processInstanceId, QueryContext queryContext); /** * Returns a complete trace of all executed (completed) and active nodes for the given process instance ID * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return {@link NodeInstance} information, in the form of a list of {@link NodeInstanceDesc} instances, * that come from a process instance that matches the given criteria (deploymentId, processId) */ Collection<NodeInstanceDesc> getProcessInstanceFullHistory(long processInstanceId, QueryContext queryContext); /** * Returns a complete trace of all events of the given type (START, END, ABORTED, SKIPPED, OBSOLETE or ERROR) for the given process instance * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @param type type of events to be returned (START, END, ABORTED, SKIPPED, OBSOLETE or ERROR). 
To return all events, use {@link #getProcessInstanceFullHistory(long, QueryContext)} * @return collection of node instance descriptions */ Collection<NodeInstanceDesc> getProcessInstanceFullHistoryByType(long processInstanceId, EntryType type, QueryContext queryContext); /** * Returns a trace of all nodes for the given node types and process instance ID * @param processInstanceId unique identifier of the process instance * @param nodeTypes list of node types to filter nodes of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return collection of node instance descriptions */ Collection<NodeInstanceDesc> getNodeInstancesByNodeType(long processInstanceId, List<String> nodeTypes, QueryContext queryContext); /** * Returns a trace of all nodes for the given node types and correlation key * @param correlationKey correlation key * @param states list of states * @param nodeTypes list of node types to filter nodes of process instance * @param queryContext control parameters for the result, such as sorting and paging * @return collection of node instance descriptions */ Collection<NodeInstanceDesc> getNodeInstancesByCorrelationKeyNodeType(CorrelationKey correlationKey, List<Integer> states, List<String> nodeTypes, QueryContext queryContext); /** * Returns a collection of all process variables and their current values for the given process instance * @param processInstanceId process instance ID * @return information about variables in the specified process instance, * represented by a list of {@link VariableDesc} instances */ Collection<VariableDesc> getVariablesCurrentState(long processInstanceId); /** * Returns a collection of changes to the given variable within the scope of a process instance * @param processInstanceId unique identifier of the process instance * @param variableId ID of the variable * @param queryContext control parameters for the result, such as sorting and paging * @return information about the variable with the given ID in the specified process instance, * represented by a list of {@link VariableDesc} instances */ Collection<VariableDesc> getVariableHistory(long processInstanceId, String variableId, QueryContext queryContext); // Process information /** * Returns a list of process definitions for the given deployment ID * @param deploymentId deployment ID of the runtime * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessDefinition} instances representing processes that match * the given criteria (deploymentId) */ Collection<ProcessDefinition> getProcessesByDeploymentId(String deploymentId, QueryContext queryContext); /** * Returns a list of process definitions that match the given filter * @param filter regular expression * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessDefinition} instances with a name or ID that matches the given regular expression */ Collection<ProcessDefinition> getProcessesByFilter(String filter, QueryContext queryContext); /** * Returns all process definitions available * @param queryContext control parameters for the result, such as sorting and paging * @return list of all available processes, in the form a of a list of {@link ProcessDefinition} instances */ Collection<ProcessDefinition> getProcesses(QueryContext queryContext); /** * Returns a list of process definition identifiers for the given deployment ID * @param deploymentId deployment ID of the runtime * 
@param queryContext control parameters for the result, such as sorting and paging * @return list of all available process id's for a particular deployment/runtime */ Collection<String> getProcessIds(String deploymentId, QueryContext queryContext); /** * Returns process definitions for the given process ID regardless of the deployment * @param processId ID of the process * @return collection of {@link ProcessDefinition} instances representing the {@link Process} * with the specified process ID */ Collection<ProcessDefinition> getProcessesById(String processId); /** * Returns the process definition for the given deployment and process identifiers * @param deploymentId ID of the deployment (runtime) * @param processId ID of the process * @return {@link ProcessDefinition} instance, representing the {@link Process} * that is present in the specified deployment with the specified process ID */ ProcessDefinition getProcessesByDeploymentIdProcessId(String deploymentId, String processId); // user task query operations /** * Return a task by its workItemId * @param workItemId * @return @{@link UserTaskInstanceDesc} task */ UserTaskInstanceDesc getTaskByWorkItemId(Long workItemId); /** * Return a task by its taskId * @param taskId * @return @{@link UserTaskInstanceDesc} task */ UserTaskInstanceDesc getTaskById(Long taskId); /** * Return a task by its taskId with SLA data if the withSLA param is true * @param taskId * @param withSLA * @return @{@link UserTaskInstanceDesc} task */ UserTaskInstanceDesc getTaskById(Long taskId, boolean withSLA); /** * Return a list of assigned tasks for a Business Administrator user. Business * administrators play the same role as task stakeholders but at task type * level. Therefore, business administrators can perform the exact same * operations as task stakeholders. 
Business administrators can also observe * the progress of notifications * * @param userId identifier of the Business Administrator user * @param filter filter for the list of assigned tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsBusinessAdministrator(String userId, QueryFilter filter); /** * Return a list of assigned tasks for a Business Administrator user for with one of the listed * statuses * @param userId identifier of the Business Administrator user * @param statuses the statuses of the tasks to return * @param filter filter for the list of assigned tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsBusinessAdministratorByStatus(String userId, List<Status> statuses, QueryFilter filter); /** * Return a list of tasks that a user is eligible to own * * @param userId identifier of the user * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, QueryFilter filter); /** * Return a list of tasks the user or user groups are eligible to own * * @param userId identifier of the user * @param groupIds a list of identifiers of the groups * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, QueryFilter filter); /** * Return a list of tasks the user is eligible to own and that are in one of the listed * statuses * * @param userId identifier of the user * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, QueryFilter filter); /** * Return a list of tasks the user or groups are eligible to own and that are in one of the listed * statuses * @param userId identifier of the user * @param groupIds filter for the identifiers of the groups * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, List<Status> status, QueryFilter filter); /** * Return a list of tasks the user is eligible to own, that are in one of the listed * statuses, and that have an expiration date starting at <code>from</code>. Tasks that do not have expiration date set * will also be included in the result set * * @param userId identifier of the user * @param status filter for the task statuses * @param from earliest expiration date for the tasks * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwnerByExpirationDateOptional(String userId, List<Status> status, Date from, QueryFilter filter); /** * Return a list of tasks the user has claimed, that are in one of the listed * statuses, and that have an expiration date starting at <code>from</code>. 
Tasks that do not have expiration date set * will also be included in the result set * * @param userId identifier of the user * @param strStatuses filter for the task statuses * @param from earliest expiration date for the tasks * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksOwnedByExpirationDateOptional(String userId, List<Status> strStatuses, Date from, QueryFilter filter); /** * Return a list of tasks the user has claimed * * @param userId identifier of the user * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksOwned(String userId, QueryFilter filter); /** * Return a list of tasks the user has claimed with one of the listed * statuses * * @param userId identifier of the user * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksOwnedByStatus(String userId, List<Status> status, QueryFilter filter); /** * Get a list of tasks the Process Instance is waiting on * * @param processInstanceId identifier of the process instance * @return list of task identifiers */ List<Long> getTasksByProcessInstanceId(Long processInstanceId); /** * Get filter for the tasks the Process Instance is waiting on that are in one of the * listed statuses * * @param processInstanceId identifier of the process instance * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksByStatusByProcessInstanceId(Long processInstanceId, List<Status> status, QueryFilter filter); /** * Get a list of task audit logs for all tasks owned by the user, applying a query filter to the list of tasks * * * @param userId identifier of the user that owns the tasks * @param filter filter for the list of tasks * @return list of @{@link AuditTask} task audit logs */ List<AuditTask> getAllAuditTask(String userId, QueryFilter filter); /** * Get a list of task audit logs for all tasks that are active and owned by the user, applying a query filter to the list of tasks * * @param userId identifier of the user that owns the tasks * @param filter filter for the list of tasks * @return list of @{@link AuditTask} audit tasks */ List<AuditTask> getAllAuditTaskByStatus(String userId, QueryFilter filter); /** * Get a list of task audit logs for group tasks (actualOwner == null) for the user, applying a query filter to the list of tasks * * @param userId identifier of the user that is associated with the group tasks * @param filter filter for the list of tasks * @return list of @{@link AuditTask} audit tasks */ List<AuditTask> getAllGroupAuditTask(String userId, QueryFilter filter); /** * Get a list of task audit logs for tasks that are assigned to a Business Administrator user, applying a query filter to the list of tasks * * @param userId identifier of the Business Administrator user * @param filter filter for the list of tasks * @return list of @{@link AuditTask} audit tasks */ List<AuditTask> getAllAdminAuditTask(String userId, QueryFilter filter); /** * Gets a list of task events for the given task * @param taskId identifier of the task * @param filter for the list of events * @return list of @{@link TaskEvent} task events */ List<TaskEvent> getTaskEvents(long taskId, QueryFilter filter); /** * Query on {@link TaskSummary} instances * @param userId the user 
associated with the tasks queried * @return {@link TaskSummaryQueryBuilder} used to create the query */ TaskSummaryQueryBuilder taskSummaryQuery(String userId); /** * Gets a list of {@link TaskSummary} instances for tasks that define a given variable * @param userId the ID of the user associated with the tasks * @param variableName the name of the task variable * @param statuses the list of statuses that the task can have * @param queryContext the query context * @return a {@link List} of {@link TaskSummary} instances */ List<TaskSummary> getTasksByVariable(String userId, String variableName, List<Status> statuses, QueryContext queryContext); /** * Gets a list of {@link TaskSummary} instances for tasks that define a given variable and the variable is set to the given value * @param userId the ID of the user associated with the tasks * @param variableName the name of the task variable * @param variableValue the value of the task variable * @param statuses the list of statuses that the task can have * @param context the query context * @return a {@link List} of {@link TaskSummary} instances */ List<TaskSummary> getTasksByVariableAndValue(String userId, String variableName, String variableValue, List<Status> statuses, QueryContext context); } 66.3.6. User Task Service The user task service covers the complete lifecycle of an individual task, and you can use the service to manage a user task from start to end. Task queries are not a part of the user task service. Use the runtime data service to query for tasks. Use the user task service for scoped operations on one task, including the following actions: Modification of selected properties Access to task variables Access to task attachments Access to task comments The user task service is also a command executor. You can use it to execute custom task commands. The following example shows starting a process and interacting with a task in the process: Starting a process and interacting with a user task in this process long processInstanceId = processService.startProcess(deployUnit.getIdentifier(), "org.jbpm.writedocument"); List<Long> taskIds = runtimeDataService.getTasksByProcessInstanceId(processInstanceId); Long taskId = taskIds.get(0); userTaskService.start(taskId, "john"); UserTaskInstanceDesc task = runtimeDataService.getTaskById(taskId); Map<String, Object> results = new HashMap<String, Object>(); results.put("Result", "some document data"); userTaskService.complete(taskId, "john", results); 66.3.7. Quartz-based timer service The process engine provides a cluster-ready timer service using Quartz. You can use the service to dispose or load your KIE session at any time. The service can manage how long a KIE session is active in order to fire each timer appropriately. 
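To enable this service, you typically point the process engine at a Quartz properties file before the runtime environment is created. The exact wiring depends on your container; a common approach, shown here as an assumption with a hypothetical file path, is to set the org.quartz.properties system property:

// Minimal sketch; the path to the Quartz properties file is hypothetical
System.setProperty("org.quartz.properties", "/opt/config/quartz-definition.properties");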
The following example shows a basic Quartz configuration file for a clustered environment:

Quartz configuration file for a clustered environment

#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=managedDS
org.quartz.jobStore.nonManagedTXDataSource=nonManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 20000
#=========================================================================
# Configure Datasources
#=========================================================================
org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psbpmsDS
org.quartz.dataSource.nonManagedDS.jndiURL=jboss/datasources/quartzNonManagedDS

You must modify the example to fit your environment.

66.3.8. Query service

The query service provides advanced search capabilities that are based on Dashbuilder data sets. With this approach, you can control how to retrieve data from the underlying data store. You can use complex JOIN statements with external tables, such as JPA entity tables or custom system database tables.

The query service is built around the following two sets of operations:

Management operations:
Register a query definition
Replace a query definition
Unregister (remove) a query definition
Get a query definition
Get all registered query definitions

Runtime operations:
Simple query based on QueryParam as the filter provider
Advanced query based on QueryParamBuilder as the filter provider

Dashbuilder data sets provide support for multiple data sources, such as CSV, SQL, and Elastic Search. However, the process engine uses an RDBMS-based backend and focuses on SQL-based data sets. Therefore, the process engine query service is a subset of Dashbuilder data set capabilities that enables efficient queries with a simple API.

66.3.8.1. Key classes of the query service

The query service relies on the following key classes:

QueryDefinition : Represents the definition of a data set. The definition consists of a unique name, an SQL expression (the query) and the source , the JNDI name of the data source to use when performing queries.
QueryParam : The basic structure that represents an individual query parameter or condition. This structure consists of the column name, operator, and expected values.
QueryResultMapper : The class that maps raw dataset data (rows and columns) to an object representation.
QueryParamBuilder : The class that builds query filters that are applied to the query definition to invoke the query.
QueryResultMapper QueryResultMapper maps data taken from a database (dataset) to an object representation. It is similar to ORM providers such as hibernate , which map tables to entities. Many object types can be used for representing dataset results. Therefore, existing mappers might not always suit your needs. Mappers in QueryResultMapper are pluggable and you can provide your own mapper when necessary, in order to transform dataset data into any type you need. The process engine supplies the following mappers: org.jbpm.kie.services.impl.query.mapper.ProcessInstanceQueryMapper , registered with the name ProcessInstances org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithVarsQueryMapper , registered with the name ProcessInstancesWithVariables org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithCustomVarsQueryMapper , registered with the name ProcessInstancesWithCustomVariables org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceQueryMapper , registered with the name UserTasks org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithVarsQueryMapper , registered with the name UserTasksWithVariables org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithCustomVarsQueryMapper , registered with name UserTasksWithCustomVariables org.jbpm.kie.services.impl.query.mapper.TaskSummaryQueryMapper , registered with the name TaskSummaries org.jbpm.kie.services.impl.query.mapper.RawListQueryMapper , registered with the name RawList Each QueryResultMapper is registered with a unique string name. You can look up mappers by this name instead of referencing the full class name. This feature is especially important when using EJB remote invocation of services, because it avoids relying on a particular implementation on the client side. To reference a QueryResultMapper by the string name, use NamedQueryMapper , which is a part of the jbpm-services-api module. This class acts as a delegate (lazy delegate) and looks up the actual mapper when the query is performed. Using NamedQueryMapper queryService.query("my query def", new NamedQueryMapper<Collection<ProcessInstanceDesc>>("ProcessInstances"), new QueryContext()); QueryParamBuilder QueryParamBuilder provides an advanced way of building filters for data sets. By default, when you use a query method of QueryService that accepts zero or more QueryParam instances, all of these parameters are joined with an AND operator, so a data entry must match all of them. However, sometimes more complicated relationships between parameters are required. You can use QueryParamBuilder to build custom builders that provide filters at the time the query is issued. One existing implementation of QueryParamBuilder is available in the process engine. It covers default QueryParams that are based on the core functions . These core functions are SQL-based conditions, including the following conditions: IS_NULL NOT_NULL EQUALS_TO NOT_EQUALS_TO LIKE_TO GREATER_THAN GREATER_OR_EQUALS_TO LOWER_THAN LOWER_OR_EQUALS_TO BETWEEN IN NOT_IN Before invoking a query, the process engine invokes the build method of the QueryParamBuilder interface as many times as necessary while the method returns a non-null value. Because of this approach, you can build up complex filter options that could not be expressed by a simple list of QueryParams . The following example shows a basic implementation of QueryParamBuilder . It relies on the DashBuilder Dataset API. 
Basic implementation of QueryParamBuilder public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> { private Map<String, Object> parameters; private boolean built = false; public TestQueryParamBuilder(Map<String, Object> parameters) { this.parameters = parameters; } @Override public ColumnFilter build() { // return null if it was already invoked if (built) { return null; } String columnName = "processInstanceId"; ColumnFilter filter = FilterFactory.OR( FilterFactory.greaterOrEqualsTo((Long)parameters.get("min")), FilterFactory.lowerOrEqualsTo((Long)parameters.get("max"))); filter.setColumnId(columnName); built = true; return filter; } } After implementing the builder, you can use an instance of this class when performing a query with the QueryService service, as shown in the following example: Running a query with the QueryService service queryService.query("my query def", ProcessInstanceQueryMapper.get(), new QueryContext(), paramBuilder); 66.3.8.2. Using the query service in a typical scenario The following procedure outlines the typical way in which your code might use the query service. Procedure Define the data set, which is a view of the data you want to use. Use the QueryDefinition class in the services API to complete this operation: Defining the data set SqlQueryDefinition query = new SqlQueryDefinition("getAllProcessInstances", "java:jboss/datasources/ExampleDS"); query.setExpression("select * from processinstancelog"); This example represents the simplest possible query definition. The constructor requires the following parameters: A unique name that identifies the query at run time A JNDI data source name to use for performing queries with this definition The parameter of the setExpression() method is the SQL statement that builds up the data set view. Queries in the query service use data from this view and filter this data as necessary. Register the query: Registering a query queryService.registerQuery(query); If required, collect all the data from the dataset, without any filtering: Collecting all the data from the dataset Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext()); This simple query uses defaults from QueryContext for paging and sorting. If required, use a QueryContext object that changes the defaults of the paging and sorting: Changing defaults using a QueryContext object QueryContext ctx = new QueryContext(0, 100, "start_date", true); Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), ctx); If required, use the query to filter data: Using a query to filter data // single filter param Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext(), QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%")); // multiple filter params (AND) Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext(), QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"), QueryParam.in(COLUMN_STATUS, 1, 3)); With the query service, you can define what data to fetch and how to filter it. Limitation of the JPA provider or other similar limitations do not apply. You can tailor database queries to your environment to increase performance. 66.3.9. 
Advanced query service The advanced query service provides capabilities to search for processes and tasks, based on process and task attributes, process variables, and internal variables of user tasks. The search automatically covers all existing processes in the process engine. The names and required values of attributes and variables are defined in QueryParam objects. Process attributes include process instance ID, correlation key, process definition ID, and deployment ID. Task attributes include task name, owner, and status. The following search methods are available: queryProcessByVariables : Search for process instances based on a list of process attributes and process variable values. To be included in the result, a process instance must have the listed attributes and the listed values in its process variables. queryProcessByVariablesAndTask : Search for process instances based on a list of process attributes, process variable values, and task variable values. To be included in the result, a process instance must have the listed attributes and the listed values in its process variables. It also must include a task with the listed values in its task variables. queryUserTasksByVariables : Search for user tasks based on a list of task attributes, task variable values, and process variable values. To be included in the result, a task must have the listed attributes and listed values in its task variables. It also must be included in a process with the listed values in its process variables. The service is provided by the AdvanceRuntimeDataService class. The interface for this class also defines predefined task and process attribute names. Definition of the AdvanceRuntimeDataService interface public interface AdvanceRuntimeDataService { String TASK_ATTR_NAME = "TASK_NAME"; String TASK_ATTR_OWNER = "TASK_OWNER"; String TASK_ATTR_STATUS = "TASK_STATUS"; String PROCESS_ATTR_INSTANCE_ID = "PROCESS_INSTANCE_ID"; String PROCESS_ATTR_CORRELATION_KEY = "PROCESS_CORRELATION_KEY"; String PROCESS_ATTR_DEFINITION_ID = "PROCESS_DEFINITION_ID"; String PROCESS_ATTR_DEPLOYMENT_ID = "PROCESS_DEPLOYMENT_ID"; String PROCESS_COLLECTION_VARIABLES = "ATTR_COLLECTION_VARIABLES"; List<ProcessInstanceWithVarsDesc> queryProcessByVariables(List<QueryParam> attributes, List<QueryParam> processVariables, QueryContext queryContext); List<ProcessInstanceWithVarsDesc> queryProcessByVariablesAndTask(List<QueryParam> attributes, List<QueryParam> processVariables, List<QueryParam> taskVariables, List<String> potentialOwners, QueryContext queryContext); List<UserTaskInstanceWithPotOwnerDesc> queryUserTasksByVariables(List<QueryParam> attributes, List<QueryParam> taskVariables, List<QueryParam> processVariables, List<String> potentialOwners, QueryContext queryContext); } 66.3.10. Process instance migration service The process instance migration service is a utility for migrating process instances from one deployment to another. Process or task variables are not affected by the migration. However, the new deployment can use a different process definition. When migrating a process, the process instance migration service also automatically migrates all the subprocesses of the process, the subprocesses of those subprocesses, and so on. If you attempt to migrate a subprocess without migrating the parent process, the migration fails. For the simplest approach to process migration, let active process instances finish and start new process instances in the new deployment. 
If this approach is not suitable for your needs, consider the following issues before starting process instance migration: Backward compatibility Data change Need for node mapping Whenever possible, create backward-compatible processes by extending process definitions. For example, removing nodes from the process definition breaks compatibility. If you make such changes, you must provide node mapping. Process instance migration uses node mapping if an active process instance is in a node that has been removed. A node map contains source node IDs from the old process definition mapped to target node IDs in the new process definition. You can map nodes of the same type only, such as a user task to a user task. Red Hat Process Automation Manager offers several implementations of the migration service: Methods in the ProcessInstanceMigrationService interface that implement the migration service public interface ProcessInstanceMigrationService { /** * Migrates a given process instance that belongs to the source deployment into the target process ID that belongs to the target deployment. * The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instance to be migrated belongs * @param processInstanceId ID of the process instance to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instance should be migrated * @return returns complete migration report */ MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId); /** * Migrates a given process instance (with node mapping) that belongs to source deployment into the target process ID that belongs to the target deployment. * The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instance to be migrated belongs * @param processInstanceId ID of the process instance to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instance should be migrated * @param nodeMapping node mapping - source and target unique IDs of nodes to be mapped - from process instance active nodes to new process nodes * @return returns complete migration report */ MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping); /** * Migrates given process instances that belong to the source deployment into a target process ID that belongs to the target deployment. 
* The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instances to be migrated belong * @param processInstanceIds list of process instance IDs to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instances should be migrated * @return returns complete migration report */ List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId); /** * Migrates given process instances (with node mapping) that belong to the source deployment into a target process ID that belongs to the target deployment. * The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instances to be migrated belong * @param processInstanceIds list of process instance ID to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instances should be migrated * @param nodeMapping node mapping - source and target unique IDs of nodes to be mapped - from process instance active nodes to new process nodes * @return returns list of migration reports one per each process instance */ List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping); } To migrate process instances on a KIE Server, use the following implementations. These methods are similar to the methods in the ProcessInstanceMigrationService interface, providing the same migration implementations for KIE Server deployments. Methods in the ProcessAdminServicesClient interface that implement the migration service for KIE Server deployments public interface ProcessAdminServicesClient { MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId); MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping); List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId); List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping); } You can migrate a single process instance or multiple process instances at once. 
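When your application uses the process engine services directly rather than KIE Server, a call to ProcessInstanceMigrationService might look like the following sketch. The service instance, deployment IDs, target process ID, and node IDs are hypothetical, and isSuccessful() is assumed to report the outcome recorded in the returned report.

Migrating a single process instance with node mapping (sketch)

// 'migrationService' is assumed to be an available ProcessInstanceMigrationService
// instance, for example injected in a CDI environment, and both deployments are active.
Map<String, String> nodeMapping = new HashMap<>();
nodeMapping.put("_userTaskV1", "_userTaskV2"); // hypothetical unique node IDs

MigrationReport report = migrationService.migrate(
        "org.jbpm:HR:1.0",    // source deployment ID (hypothetical)
        processInstanceId,     // ID of an active process instance
        "org.jbpm:HR:2.0",    // target deployment ID (hypothetical)
        "hiring",              // target process ID (hypothetical)
        nodeMapping);

System.out.println("Was migration successful: " + report.isSuccessful());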
If you migrate multiple process instances, each instance is migrated in a separate transaction to ensure that the migrations do not affect each other. After migration is completed, the migrate method returns a MigrationReport object that contains the following information: The start and end dates of the migration. The migration outcome (success or failure). A log entry of the INFO , WARN , or ERROR type. The ERROR message terminates the migration. The following example shows a process instance migration: Migrating a process instance in a KIE Server deployment import org.kie.server.api.model.admin.MigrationReportInstance; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; public class ProcessInstanceMigrationTest{ private static final String SOURCE_CONTAINER = "com.redhat:MigrateMe:1.0"; private static final String SOURCE_PROCESS_ID = "MigrateMe.MigrateMev1"; private static final String TARGET_CONTAINER = "com.redhat:MigrateMe:2"; private static final String TARGET_PROCESS_ID = "MigrateMe.MigrateMeV2"; public static void main(String[] args) { KieServicesConfiguration config = KieServicesFactory.newRestConfiguration("http://HOST:PORT/kie-server/services/rest/server", "USERNAME", "PASSWORD"); config.setMarshallingFormat(MarshallingFormat.JSON); KieServicesClient client = KieServicesFactory.newKieServicesClient(config); long sourcePid = client.getProcessClient().startProcess(SOURCE_CONTAINER, SOURCE_PROCESS_ID); // Use the 'report' object to return migration results. MigrationReportInstance report = client.getAdminClient().migrateProcessInstance(SOURCE_CONTAINER, sourcePid, TARGET_CONTAINER, TARGET_PROCESS_ID); System.out.println("Was migration successful:" + report.isSuccessful()); client.getProcessClient().abortProcessInstance(TARGET_CONTAINER, sourcePid); } } Known limitations of process instance migration The following situations can cause a failure of the migration or incorrect migration: A new or modified task requires inputs that are not available in the migrated process instance. You modify the tasks prior to the active task where the changes have an impact on further processing. You remove a human task that is currently active. To replace a human task, you must map it to another human task. You add a new task parallel to the single active task. As all branches in an AND gateway are not activated, the process gets stuck. You remove active timer events (these events are not changed in the database). You fix or update inputs and outputs in an active task (the task data is not migrated). If you apply mapping to a task node, only the task node name and description are mapped. Other task fields, including the TaskName variable, are not mapped to the new task. 66.3.11. Deployments and different process versions The deployment service puts business assets into an execution environment. However, in some cases additional management is required to make the assets available in the correct context. Notably, if you deploy several versions of the same process, you must ensure that process instances use the correct version. Activation and Deactivation of deployments In some cases, a number of process instances are running on a deployment, and then you add a new version of the same process to the runtime environment. You might decide that new instances of this process definition must use the new version while the existing active instances should continue with the previous version.
To enable this scenario, use the following methods of the deployment service: activate : Activates a deployment so it can be available for interaction. You can list its process definitions and start new process instances for this deployment. deactivate : Deactivates a deployment. Disables the option to list process definitions and to start new process instances of processes in the deployment. However, you can continue working with the process instances that are already active, for example, signal events and interact with user tasks. You can use this feature for smooth transition between project versions without the need for process instance migration. Invocation of the latest version of a process If you need to use the latest version of the project's process, you can use the LATEST keyword to interact with several operations in services. This approach is supported only when the process identifier remains the same in all versions. The following example explains the feature. The initial deployment unit is org.jbpm:HR:1.0 . It contains the first version of a hiring process. After several weeks, you develop a new version and deploy it to the execution server as org.jbpm:HR:2.0 . It includes version 2 of the hiring process. If you want to call the process and ensure that you use the latest version, you can use the following deployment ID: org.jbpm:HR:LATEST If you use this deployment ID, the process engine finds the latest available version of the project. It uses the following identifiers: groupId : org.jbpm artifactId : HR The version numbers are compared by Maven rules to find the latest version. The following code example shows deployment of multiple versions and interacting with the latest version: Deploying multiple versions of a process and interacting with the latest version KModuleDeploymentUnit deploymentUnitV1 = new KModuleDeploymentUnit("org.jbpm", "HR", "1.0"); deploymentService.deploy(deploymentUnitV1); long processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask"); ProcessInstanceDesc piDesc = runtimeDataService.getProcessInstanceById(processInstanceId); // We have started a process with the project version 1 assertEquals(deploymentUnitV1.getIdentifier(), piDesc.getDeploymentId()); // we deploy version 2 KModuleDeploymentUnit deploymentUnitV2 = new KModuleDeploymentUnit("org.jbpm", "HR", "2.0"); deploymentService.deploy(deploymentUnitV2); processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask"); piDesc = runtimeDataService.getProcessInstanceById(processInstanceId); // This time we have started a process with the project version 2 assertEquals(deploymentUnitV2.getIdentifier(), piDesc.getDeploymentId()); Note This feature is also available in the KIE Server REST API. When sending a request with a deployment ID, you can use LATEST as the version identifier. Additional resources Interacting with Red Hat Process Automation Manager using KIE APIs 66.3.12. Deployment synchronization Process engine services include a deployment synchronizer that stores available deployments into a database, including the deployment descriptor for every deployment. The synchronizer also monitors this table to keep it in sync with other installations that might be using the same data source. This functionality is especially important when running in a cluster or when Business Central and a custom application must operate on the same artifacts. By default, when running core services, you must configure synchronization.
For EJB and CDI extensions, synchronization is enabled automatically. The following code sample configures synchronization: Configuring synchronization TransactionalCommandService commandService = new TransactionalCommandService(emf); DeploymentStore store = new DeploymentStore(); store.setCommandService(commandService); DeploymentSynchronizer sync = new DeploymentSynchronizer(); sync.setDeploymentService(deploymentService); sync.setDeploymentStore(store); DeploymentSyncInvoker invoker = new DeploymentSyncInvoker(sync, 2L, 3L, TimeUnit.SECONDS); invoker.start(); .... invoker.stop(); With this configuration, deployments are synchronized every three seconds with an initial delay of two seconds. 66.4. Threads in the process engine We can refer to two types of multi-threading: logical and technical . Technical multi-threading involves multiple threads or processes that are started, for example, by a Java or C program. Logical multi-threading happens in a BPM process, for example, after the process reaches a parallel gateway. In execution logic, the original process splits into two processes that run in a parallel fashion. Process engine code implements logical multi-threading using one technical thread. The reason for this design choice is that multiple (technical) threads must be able to communicate state information to each other if they are working on the same process. This requirement brings a number of complications. The extra logic required for safe communication between threads, as well as the extra overhead required to avoid race conditions and deadlocks, can negate any performance benefit of using such threads. In general, the process engine executes actions in series. For example, when the process engine encounters a script task in a process, it executes the script synchronously and waits for it to complete before continuing execution. In the same way, if a process encounters a parallel gateway, the process engine sequentially triggers each of the outgoing branches, one after the other. This is possible because execution is almost always instantaneous, meaning that it is extremely fast and produces almost no overhead. As a result, sequential execution does not create any effects that a user can notice. Any code in a process that you supply is also executed synchronously and the process engine waits for it to finish before continuing the process. For example, if you use a Thread.sleep(... ) as part of a custom script, the process engine thread is blocked during the sleep period. When a process reaches a service task, the process engine also invokes the handler for the task synchronously and waits for the completeWorkItem(... ) method to return before continuing execution. If your service handler is not instantaneous, implement the asynchronous execution independently in your code. For example, your service task might invoke an external service. The delay in invoking this service remotely and waiting for the results might be significant. Therefore, invoke this service asynchronously. Your handler must only invoke the service and then return from the method, then notify the process engine later when the results are available. In the meantime, the process engine can continue execution of the process. Human tasks are a typical example of a service that needs to be invoked asynchronously. A human task requires a human actor to respond to a request, and the process engine must not wait for this response. 
When a human task node is triggered, the human task handler only creates a new task on the task list of the assigned actor. The process engine is then able to continue execution on the rest of the process, if necessary. The handler notifies the process engine asynchronously when the user has completed the task. 66.5. Execution errors in the process engine Any part of process engine execution, including the task service, can throw an exception. An exception can be any class that extends java.lang.Throwable . Some exceptions are handled at the process level. Notably, a work item handler can throw a custom exception that specifies a subprocess for error handling. For information about developing work item handlers, see Custom tasks and work item handlers . If an exception is not handled and reaches the process engine, it becomes an execution error . When an execution error happens, the process engine rolls back the current transaction and leaves the process in the previous stable state. After that, the process engine continues the execution of the process from that point. Execution errors are visible to the caller that sent the request to the process engine. The process engine also includes an extendable mechanism for handling execution errors and storing information about them. This mechanism consists of the following components: ExecutionErrorManager : The entry point for error handling. This class is integrated with RuntimeManager , which is responsible for providing it to the underlying KieSession and TaskService . ExecutionErrorManager provides access to other classes in the execution error handling mechanism. When the process engine creates a RuntimeManager instance, it also creates a corresponding ExecutionErrorManager instance. ExecutionErrorHandler : The primary class for error handling. This class is implemented in the process engine and you normally do not need to customize or extend it directly. ExecutionErrorHandler calls error filters to process particular errors and calls ExecutionErrorStorage to store error information. The ExecutionErrorHandler is bound to the life cycle of RuntimeEngine ; it is created when a new runtime engine is created and is destroyed when RuntimeEngine is disposed. A single instance of the ExecutionErrorHandler is used within a given execution context or transaction. Both KieSession and TaskService use that instance to inform the error handling about processed nodes or tasks. ExecutionErrorHandler is informed about the following events: Starting of processing of a node instance Completion of processing of a node instance Starting of processing of a task instance Completion of processing of a task instance The ExecutionErrorHandler uses this information to record the context for errors, especially if the error itself does not provide process context information. For example, database exceptions do not carry any process information. ExecutionErrorStorage : The pluggable storage class for execution error information. When the process engine creates a RuntimeManager instance, it also creates a corresponding ExecutionErrorStorage instance. Then the ExecutionErrorHandler class calls this ExecutionErrorStorage instance to store information about every execution error. The default storage implementation uses a database table to store all the available information for every error. Different detail levels might be available for different error types, as some errors might not permit extraction of detailed information.
A number of filters that process particular types of execution errors. You can add custom filters. By default, every execution error is recorded as unacknowledged . You can use Business Central to view all recorded execution errors and to acknowledge them. You can also create jobs that automatically acknowledge all or some execution errors. For information about using Business Central to view execution errors and to create jobs that acknowledge the errors automatically, see Managing and monitoring business processes in Business Central . 66.5.1. Execution error types and filters Execution error handling attempts to catch and handle any kind of error. However, users might need to handle different errors in different ways. Also, different detailed information is available for different types of errors. The error handling mechanism supports pluggable filters . Every filter processes a particular type of error. You can add filters that process specific errors in different ways, overriding default processing. A filter is an implementation of the ExecutionErrorFilter interface. This interface builds instances of ExecutionError , which are later stored using the ExecutionErrorStorage class. The ExecutionErrorFilter interface has the following methods: accept : Indicates if an error can be processed by the filter filter : Processes an error and returns the ExecutionError instance getPriority : Indicates the priority for this filter The execution error handler processes each error separately. For each error, it starts calling the accept method of all registered filters, starting with the filters that have a lower priority value. If the accept method of a filter returns true , the handler calls the filter method of the filter and does not call any other filters. Because of the priority system, only one filter processes any error. More specialized filters have lower priority values. An error that is not accepted by any specialized filters reaches generic filters that have higher priority values. The ServiceLoader mechanism provides ExecutionErrorFilter instances. To register custom filters, add their fully qualified class names to the META-INF/services/org.kie.internal.runtime.error.ExecutionErrorFilter file of your service project. Red Hat Process Automation Manager ships with the following execution error filters: Table 66.1. ExecutionErrorFilters Class name Type Priority org.jbpm.runtime.manager.impl.error.filters.ProcessExecutionErrorFilter Process 100 org.jbpm.runtime.manager.impl.error.filters.TaskExecutionErrorFilter Task 80 org.jbpm.runtime.manager.impl.error.filters.DBExecutionErrorFilter DB 200 org.jbpm.executor.impl.error.JobExecutionErrorFilter Job 100 Filters are given a higher execution order based on the lowest value of the priority. Therefore, the execution error handler invokes these filters in the following order: Task Process Job DB 66.6. Event listeners in the process engine Every time that a process or task changes to a different point in its lifecycle, the process engine generates an event. You can develop a class that receives and processes such events. This class is called an event listener . The process engine passes an event object to this class. The object provides access to related information. For example, if the event is related to a process node, the object provides access to the process instance and the node instance. 66.6.1. Interfaces for event listeners You can use the following interfaces to develop event listeners for the process engine. 66.6.1.1. 
Interfaces for process event listeners You can develop a class that implements the ProcessEventListener interface. This class can listen to process-related events, such as starting or completing a process or entering and leaving a node. The following source code shows the different methods of the ProcessEventListener interface: The ProcessEventListener interface public interface ProcessEventListener extends EventListener { /** * This listener method is invoked right before a process instance is being started. * @param event */ void beforeProcessStarted(ProcessStartedEvent event); /** * This listener method is invoked right after a process instance has been started. * @param event */ void afterProcessStarted(ProcessStartedEvent event); /** * This listener method is invoked right before a process instance is being completed (or aborted). * @param event */ void beforeProcessCompleted(ProcessCompletedEvent event); /** * This listener method is invoked right after a process instance has been completed (or aborted). * @param event */ void afterProcessCompleted(ProcessCompletedEvent event); /** * This listener method is invoked right before a node in a process instance is being triggered * (which is when the node is being entered, for example when an incoming connection triggers it). * @param event */ void beforeNodeTriggered(ProcessNodeTriggeredEvent event); /** * This listener method is invoked right after a node in a process instance has been triggered * (which is when the node was entered, for example when an incoming connection triggered it). * @param event */ void afterNodeTriggered(ProcessNodeTriggeredEvent event); /** * This listener method is invoked right before a node in a process instance is being left * (which is when the node is completed, for example when it has performed the task it was * designed for). * @param event */ void beforeNodeLeft(ProcessNodeLeftEvent event); /** * This listener method is invoked right after a node in a process instance has been left * (which is when the node was completed, for example when it performed the task it was * designed for). * @param event */ void afterNodeLeft(ProcessNodeLeftEvent event); /** * This listener method is invoked right before the value of a process variable is being changed. * @param event */ void beforeVariableChanged(ProcessVariableChangedEvent event); /** * This listener method is invoked right after the value of a process variable has been changed. * @param event */ void afterVariableChanged(ProcessVariableChangedEvent event); /** * This listener method is invoked right before a process/node instance's SLA has been violated. * @param event */ default void beforeSLAViolated(SLAViolatedEvent event) {} /** * This listener method is invoked right after a process/node instance's SLA has been violated. * @param event */ default void afterSLAViolated(SLAViolatedEvent event) {} /** * This listener method is invoked when a signal is sent * @param event */ default void onSignal(SignalEvent event) {} /** * This listener method is invoked when a message is sent * @param event */ default void onMessage(MessageEvent event) {} } You can implement any of these methods to process the corresponding event. For the definition of the event classes that the process engine passes to the methods, see the org.kie.api.event.process package in the Java documentation . You can use the methods of the event class to retrieve other classes that contain all information about the entities involved in the event. 
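For example, the following sketch extends DefaultProcessEventListener, which provides empty implementations of all the interface methods, and overrides only the two methods it needs. The class name is hypothetical.

A process event listener that logs process start and completion (sketch)

import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessCompletedEvent;
import org.kie.api.event.process.ProcessStartedEvent;

public class LoggingProcessEventListener extends DefaultProcessEventListener {

    @Override
    public void afterProcessStarted(ProcessStartedEvent event) {
        // Log the ID of the process definition that was started
        System.out.println("Process started: " + event.getProcessInstance().getProcessId());
    }

    @Override
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        System.out.println("Process completed: " + event.getProcessInstance().getProcessId());
    }
}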
The following example is a part of a node-related event, such as afterNodeLeft() , and retrieves the process instance and node type. Retrieving the process instance and node type in a node-related event WorkflowProcessInstance processInstance = event.getNodeInstance().getProcessInstance() NodeType nodeType = event.getNodeInstance().getNode().getNodeType() 66.6.1.2. Interfaces for task lifecycle event listeners You can develop a class that implements the TaskLifecycleEventListener interface. This class can listen to events related to the lifecycle of tasks, such as assignment of an owner or completion of a task. The following source code shows the different methods of the TaskLifecycleEventListener interface: The TaskLifecycleEventListener interface public interface TaskLifeCycleEventListener extends EventListener { public enum AssignmentType { POT_OWNER, EXCL_OWNER, ADMIN; } public void beforeTaskActivatedEvent(TaskEvent event); public void beforeTaskClaimedEvent(TaskEvent event); public void beforeTaskSkippedEvent(TaskEvent event); public void beforeTaskStartedEvent(TaskEvent event); public void beforeTaskStoppedEvent(TaskEvent event); public void beforeTaskCompletedEvent(TaskEvent event); public void beforeTaskFailedEvent(TaskEvent event); public void beforeTaskAddedEvent(TaskEvent event); public void beforeTaskExitedEvent(TaskEvent event); public void beforeTaskReleasedEvent(TaskEvent event); public void beforeTaskResumedEvent(TaskEvent event); public void beforeTaskSuspendedEvent(TaskEvent event); public void beforeTaskForwardedEvent(TaskEvent event); public void beforeTaskDelegatedEvent(TaskEvent event); public void beforeTaskNominatedEvent(TaskEvent event); public default void beforeTaskUpdatedEvent(TaskEvent event){}; public default void beforeTaskReassignedEvent(TaskEvent event){}; public default void beforeTaskNotificationEvent(TaskEvent event){}; public default void beforeTaskInputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void beforeTaskOutputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void beforeTaskAssignmentsAddedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; public default void beforeTaskAssignmentsRemovedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; public void afterTaskActivatedEvent(TaskEvent event); public void afterTaskClaimedEvent(TaskEvent event); public void afterTaskSkippedEvent(TaskEvent event); public void afterTaskStartedEvent(TaskEvent event); public void afterTaskStoppedEvent(TaskEvent event); public void afterTaskCompletedEvent(TaskEvent event); public void afterTaskFailedEvent(TaskEvent event); public void afterTaskAddedEvent(TaskEvent event); public void afterTaskExitedEvent(TaskEvent event); public void afterTaskReleasedEvent(TaskEvent event); public void afterTaskResumedEvent(TaskEvent event); public void afterTaskSuspendedEvent(TaskEvent event); public void afterTaskForwardedEvent(TaskEvent event); public void afterTaskDelegatedEvent(TaskEvent event); public void afterTaskNominatedEvent(TaskEvent event); public default void afterTaskReassignedEvent(TaskEvent event){}; public default void afterTaskUpdatedEvent(TaskEvent event){}; public default void afterTaskNotificationEvent(TaskEvent event){}; public default void afterTaskInputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void afterTaskOutputVariableChangedEvent(TaskEvent event, Map<String, Object> 
variables){}; public default void afterTaskAssignmentsAddedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; public default void afterTaskAssignmentsRemovedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; } You can implement any of these methods to process the corresponding event. For the definition of the event class that the process engine passes to the methods, see the org.kie.api.task package in the Java documentation . You can use the methods of the event class to retrieve the classes representing the task, task context, and task metadata. 66.6.2. Timing of calls to event listeners A number of event listener calls are before and after events, for example, beforeNodeLeft() and afterNodeLeft() , beforeTaskActivatedEvent() and afterTaskActivatedEvent() . The before and after event calls typically act like a stack. If event A directly causes event B, the following sequence of calls happens: Before A Before B After B After A For example, if leaving node X triggers node Y, all event calls related to triggering node Y occur between the beforeNodeLeft and afterNodeLeft calls for node X. In the same way, if starting a process directly causes some nodes to start, all nodeTriggered and nodeLeft event calls occur between the beforeProcessStarted and afterProcessStarted calls. This approach reflects cause and effect relationships between events. However, the timing and order of after event calls are not always intuitive. For example, an afterProcessStarted call can happen after the afterNodeLeft calls for some nodes in the process. In general, to be notified when a particular event occurs, use the before call for the event. Use an after call only if you want to make sure that all processing related to this event has ended, for example, when you want to be notified when all steps associated with starting a particular process instance have been completed. Depending on the type of node, some nodes might only generate nodeLeft calls and others might only generate nodeTriggered calls. For example, catch intermediate event nodes do not generate nodeTriggered calls because they are not triggered by another process node. Similarly, throw intermediate event nodes do not generate nodeLeft calls because these nodes do not have an outgoing connection to another node. 66.6.3. Practices for development of event listeners The process engine calls event listeners during processing of events or tasks. The calls happen within process engine transactions and block execution. Therefore, the event listener can affect the logic and performance of the process engine. To ensure minimal disruption, follow the following guidelines: Any action must be as short as possible. A listener class must not have a state. The process engine can destroy and re-create a listener class at any time. If the listener modifies any resource that exists outside the scope of the listener method, ensure that the resource is enlisted in the current transaction. The transaction might be rolled back. In this case, if the modified resource is not a part of the transaction, the state of the resource becomes inconsistent. Database-related resources provided by Red Hat JBoss EAP are always enlisted in the current transaction. In other cases, check the JTA information for the runtime environment that you are using. Do not use logic that relies on the order of execution of different event listeners. 
Do not include interactions with different entities outside the process engine within a listener. For example, do not include REST calls for notification of events. Instead, use process nodes to complete such calls. An exception is the output of logging information; however, a logging listener must be as simple as possible. You can use a listener to modify the state of the process or task that is involved in the event, for example, to change its variables. You can use a listener to interact with the process engine, for example, to send signals or to interact with process instances that are not involved in the event. 66.6.4. Registration of event listeners The KieSession class implements the RuleRuntimeEventManager interface that provides methods for registering, removing, and listing event listeners, as shown in the following list. Methods of the RuleRuntimeEventManager interface void addEventListener(AgendaEventListener listener); void addEventListener(RuleRuntimeEventListener listener); void removeEventListener(AgendaEventListener listener); void removeEventListener(RuleRuntimeEventListener listener); Collection<AgendaEventListener> getAgendaEventListeners(); Collection<RuleRuntimeEventListener> getRuleRuntimeEventListeners(); However, in a typical case, do not use these methods. If you are using the RuntimeManager interface, you can use the RuntimeEnvironment class to register event listeners. If you are using the Services API, you can add fully qualified class names of event listeners to the META-INF/services/org.jbpm.services.task.deadlines.NotificationListener file in your project. The Services API also registers some default listeners, including org.jbpm.services.task.deadlines.notifications.impl.email.EmailNotificationListener , which can send email notifications for events. To exclude a default listener, you can add the fully qualified name of the listener to the org.kie.jbpm.notification_listeners.exclude JVM system property. 66.6.5. KieRuntimeLogger event listener The KieServices package contains the KieRuntimeLogger event listener that you can add to your KIE session. You can use this listener to create an audit log. This log contains all the different events that occurred at runtime. Note These loggers are intended for debugging purposes. They might be too detailed for business-level process analysis. The listener implements the following logger types: Console logger: This logger writes out all the events to the console. The fully qualified class name for this logger is org.drools.core.audit.WorkingMemoryConsoleLogger . File logger: This logger writes out all the events to a file using an XML representation. You can use the log file in an IDE to generate a tree-based visualization of the events that occurred during execution. The fully qualified class name for this logger is org.drools.core.audit.WorkingMemoryFileLogger . The file logger writes the events to disk only when closing the logger or when the number of events in the logger reaches a predefined level. Therefore, it is not suitable for debugging processes at runtime. Threaded file logger: This logger writes the events to a file after a specified time interval. You can use this logger to visualize the progress in real time while debugging processes. The fully qualified class name for this logger is org.drools.core.audit.ThreadedWorkingMemoryFileLogger . When creating a logger, you must pass the KIE session as an argument. The file loggers also require the name of the log file to be created.
The threaded file logger requires the interval in milliseconds after which the events are saved. Always close the logger at the end of your application. The following example shows the use of the file logger. Using the file logger import org.kie.api.KieServices; import org.kie.api.logger.KieRuntimeLogger; ... KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, "test"); // add invocations to the process engine here, // e.g. ksession.startProcess(processId); ... logger.close(); The log file that is created by the file-based loggers contains an XML-based overview of all the events that occurred during the runtime of the process. 66.7. Process engine configuration You can use several control parameters available to alter the process engine default behavior to suit the requirements of your environment. Set these parameters as JVM system properties, usually with the -D option when starting a program such as an application server. Table 66.2. Control parameters Name Possible values Default value Description jbpm.ut.jndi.lookup String Alternative JNDI name to be used when there is no access to the default name ( java:comp/UserTransaction ). NOTE: The name must be valid for the given runtime environment. Do not use this variable if there is no access to the default user transaction JNDI name. jbpm.enable.multi.con true | false false Enable multiple incoming and outgoing sequence flows support for activities jbpm.business.calendar.properties String / jbpm.business.calendar.properties Alternative class path location of the business calendar configuration file jbpm.overdue.timer.delay Long 2000 Specifies the delay for overdue timers to allow proper initialization, in milliseconds jbpm.process.name.comparator String Alternative comparator class to enable starting a process by name, by default the NumberVersionComparator comparator is used jbpm.loop.level.disabled true | false true Enable or disable loop iteration tracking for advanced loop support when using XOR gateways org.kie.mail.session String mail / jbpmMailSession Alternative JNDI name for the mail session used by Task Deadlines jbpm.usergroup.callback.properties String / jbpm.usergroup.callback.properties Alternative class path location for a user group callback implementation (LDAP, DB) jbpm.user.group.mapping String USD{jboss.server.config.dir}/roles.properties Alternative location of the roles.properties file for JBossUserGroupCallbackImpl jbpm.user.info.properties String / jbpm.user.info.properties Alternative class path location of the user info configuration (used by LDAPUserInfoImpl ) org.jbpm.ht.user.separator String , Alternative separator of actors and groups for user tasks org.quartz.properties String Location of the Quartz configuration file to activate the Quartz-based timer service jbpm.data.dir String USD{jboss.server.data.dir} if available, otherwise USD{java.io.tmpdir} Location to store data files produced by the process engine org.kie.executor.pool.size Integer 1 Thread pool size for the process engine executor org.kie.executor.retry.count Integer 3 Number of retries attempted by the process engine executor in case of an error org.kie.executor.interval Integer 0 Frequency used to check for pending jobs by the process engine executor, in seconds. If the value is 0 , the check is run once, during the startup of the executor. 
org.kie.executor.disabled true | false true Disable the process engine executor org.kie.store.services.class String org.drools.persistence.jpa.KnowledgeStoreServiceImpl Fully qualified name of the class that implements KieStoreServices that is responsible for bootstrapping KieSession instances org.kie.jbpm.notification_listeners.exclude String Fully qualified names of event listeners that must be excluded even if they would otherwise be used. Separate multiple names with commas. For example, you can add org.jbpm.services.task.deadlines.notifications.impl.email.EmailNotificationListener to exclude the default email notification listener. org.kie.jbpm.notification_listeners.include String Fully qualified names of event listeners that must be included. Separate multiple names with commas. If you set this property, only the listeners in this property are included and all other listeners are excluded.
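For example, the following sketch sets a few of these parameters programmatically before the process engine starts; the values are only illustrative. In an application server, pass the same properties as -D JVM options instead.

Setting control parameters as system properties (sketch)

// Illustrative values only; adjust them to your environment
System.setProperty("jbpm.overdue.timer.delay", "5000");   // wait 5 seconds before firing overdue timers
System.setProperty("org.kie.executor.pool.size", "4");    // use 4 executor threads
System.setProperty("org.kie.executor.interval", "3");     // check for pending jobs every 3 seconds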
[ "KieHelper kieHelper = new KieHelper(); KieBase kieBase = kieHelper .addResource(ResourceFactory.newClassPathResource(\"MyProcess.bpmn\")) .build();", "KieSession ksession = kbase.newKieSession(); ProcessInstance processInstance = ksession.startProcess(\"com.sample.MyProcess\");", "/** * Start a new process instance. Use the process (definition) that * is referenced by the given process ID. * * @param processId The ID of the process to start * @return the ProcessInstance that represents the instance of the process that was started */ ProcessInstance startProcess(String processId); /** * Start a new process instance. Use the process (definition) that * is referenced by the given process ID. You can pass parameters * to the process instance as name-value pairs, and these parameters set * variables of the process instance. * * @param processId the ID of the process to start * @param parameters the process variables to set when starting the process instance * @return the ProcessInstance that represents the instance of the process that was started */ ProcessInstance startProcess(String processId, Map<String, Object> parameters); /** * Signals the process engine that an event has occurred. The type parameter defines * the type of event and the event parameter can contain additional information * related to the event. All process instances that are listening to this type * of (external) event will be notified. For performance reasons, use this type of * event signaling only if one process instance must be able to notify * other process instances. For internal events within one process instance, use the * signalEvent method that also include the processInstanceId of the process instance * in question. * * @param type the type of event * @param event the data associated with this event */ void signalEvent(String type, Object event); /** * Signals the process instance that an event has occurred. The type parameter defines * the type of event and the event parameter can contain additional information * related to the event. All node instances inside the given process instance that * are listening to this type of (internal) event will be notified. Note that the event * will only be processed inside the given process instance. All other process instances * waiting for this type of event will not be notified. * * @param type the type of event * @param event the data associated with this event * @param processInstanceId the id of the process instance that should be signaled */ void signalEvent(String type, Object event, long processInstanceId); /** * Returns a collection of currently active process instances. Note that only process * instances that are currently loaded and active inside the process engine are returned. * When using persistence, it is likely not all running process instances are loaded * as their state is stored persistently. It is best practice not to use this * method to collect information about the state of your process instances but to use * a history log for that purpose. * * @return a collection of process instances currently active in the session */ Collection<ProcessInstance> getProcessInstances(); /** * Returns the process instance with the given ID. Note that only active process instances * are returned. If a process instance has been completed already, this method returns * null. 
* * @param id the ID of the process instance * @return the process instance with the given ID, or null if it cannot be found */ ProcessInstance getProcessInstance(long processInstanceId); /** * Aborts the process instance with the given ID. If the process instance has been completed * (or aborted), or if the process instance cannot be found, this method will throw an * IllegalArgumentException. * * @param id the ID of the process instance */ void abortProcessInstance(long processInstanceId); /** * Returns the WorkItemManager related to this session. This object can be used to * register new WorkItemHandlers or to complete (or abort) WorkItems. * * @return the WorkItemManager related to this session */ WorkItemManager getWorkItemManager();", "/** * Start a new process instance. Use the process (definition) that * is referenced by the given process ID. You can pass parameters * to the process instance (as name-value pairs), and these parameters set * variables of the process instance. * * @param processId the ID of the process to start * @param correlationKey custom correlation key that can be used to identify the process instance * @param parameters the process variables to set when starting the process instance * @return the ProcessInstance that represents the instance of the process that was started */ ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters); /** * Create a new process instance (but do not yet start it). Use the process * (definition) that is referenced by the given process ID. * You can pass to the process instance (as name-value pairs), * and these parameters set variables of the process instance. * Use this method if you need a reference to the process instance before actually * starting it. Otherwise, use startProcess. * * @param processId the ID of the process to start * @param correlationKey custom correlation key that can be used to identify the process instance * @param parameters the process variables to set when creating the process instance * @return the ProcessInstance that represents the instance of the process that was created (but not yet started) */ ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters); /** * Returns the process instance with the given correlationKey. Note that only active process instances * are returned. If a process instance has been completed already, this method will return * null. 
* * @param correlationKey the custom correlation key assigned when the process instance was created * @return the process instance identified by the key or null if it cannot be found */ ProcessInstance getProcessInstance(CorrelationKey correlationKey);", "public interface RuntimeManager { /** * Returns a <code>RuntimeEngine</code> instance that is fully initialized: * <ul> * <li>KieSession is created or loaded depending on the strategy</li> * <li>TaskService is initialized and attached to the KIE session (through a listener)</li> * <li>WorkItemHandlers are initialized and registered on the KIE session</li> * <li>EventListeners (process, agenda, working memory) are initialized and added to the KIE session</li> * </ul> * @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code> * @return instance of the <code>RuntimeEngine</code> */ RuntimeEngine getRuntimeEngine(Context<?> context); /** * Unique identifier of the <code>RuntimeManager</code> * @return */ String getIdentifier(); /** * Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact. * This method should always be used to dispose <code>RuntimeEngine</code> that is not needed * anymore. <br/> * Do not use KieSession.dispose() used with RuntimeManager as it will break the internal * mechanisms of the manager responsible for clear and efficient disposal.<br/> * Disposing is not needed if <code>RuntimeEngine</code> was obtained within an active JTA transaction, * if the getRuntimeEngine method was invoked during active JTA transaction, then disposing of * the runtime engine will happen automatically on transaction completion. * @param runtime */ void disposeRuntimeEngine(RuntimeEngine runtime); /** * Closes <code>RuntimeManager</code> and releases its resources. Call this method when * a runtime manager is not needed anymore. Otherwise it will still be active and operational. 
*/ void close(); }", "public interface RuntimeEngine { /** * Returns the <code>KieSession</code> configured for this <code>RuntimeEngine</code> * @return */ KieSession getKieSession(); /** * Returns the <code>TaskService</code> configured for this <code>RuntimeEngine</code> * @return */ TaskService getTaskService(); }", "// First, configure the environment to be used by RuntimeManager RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get() .newDefaultInMemoryBuilder() .addAsset(ResourceFactory.newClassPathResource(\"BPMN2-ScriptTask.bpmn2\"), ResourceType.BPMN2) .get(); // Next, create the RuntimeManager - in this case the singleton strategy is chosen RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment); // Then get RuntimeEngine from the runtime manager, using an empty context because singleton does not keep track // of runtime engine as there is only one RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get()); // Get the KieSession from the RuntimeEngine - already initialized with all handlers, listeners, and other requirements // configured on the environment KieSession ksession = runtimeEngine.getKieSession(); // Add invocations of the process engine here, // for example, ksession.startProcess(processId); // Finally, dispose the runtime engine manager.disposeRuntimeEngine(runtimeEngine);", "public interface RuntimeEnvironment { /** * Returns <code>KieBase</code> that is to be used by the manager * @return */ KieBase getKieBase(); /** * KieSession environment that is to be used to create instances of <code>KieSession</code> * @return */ Environment getEnvironment(); /** * KieSession configuration that is to be used to create instances of <code>KieSession</code> * @return */ KieSessionConfiguration getConfiguration(); /** * Indicates if persistence is to be used for the KieSession instances * @return */ boolean usePersistence(); /** * Delivers a concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners * that is to be registered on instances of <code>KieSession</code> * @return */ RegisterableItemsFactory getRegisterableItemsFactory(); /** * Delivers a concrete implementation of <code>UserGroupCallback</code> that is to be registered on instances * of <code>TaskService</code> for managing users and groups. 
* @return */ UserGroupCallback getUserGroupCallback(); /** * Delivers a custom class loader that is to be used by the process engine and task service instances * @return */ ClassLoader getClassLoader(); /** * Closes the environment, permitting closing of all dependent components such as ksession factories */ void close();", "public interface RuntimeEnvironmentBuilder { public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled); public RuntimeEnvironmentBuilder entityManagerFactory(Object emf); public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type); public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value); public RuntimeEnvironmentBuilder addConfiguration(String name, String value); public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase); public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback); public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory); public RuntimeEnvironment get(); public RuntimeEnvironmentBuilder classLoader(ClassLoader cl); public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler);", "public interface RuntimeEnvironmentBuilderFactory { /** * Provides a completely empty <code>RuntimeEnvironmentBuilder</code> instance to manually * set all required components instead of relying on any defaults. * @return new instance of <code>RuntimeEnvironmentBuilder</code> */ public RuntimeEnvironmentBuilder newEmptyBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * but does not have persistence for the process engine configured so it will only store process instances in memory * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files * @param groupId group id of kjar * @param artifactId artifact id of kjar * @param version version number of kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files and use the kbase and ksession settings in the KJAR * @param groupId group id of kjar * @param artifactId artifact id of kjar * @param version version number of kjar * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar * @param ksessionName name of the ksession define in kmodule.xml stored in kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see 
DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version, String kbaseName, String ksessionName); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files and use the release ID defined in the KJAR * @param releaseId <code>ReleaseId</code> that described the kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * This method is tailored to work smoothly with KJAR files and use the kbase, ksession, and release ID settings in the KJAR * @param releaseId <code>ReleaseId</code> that described the kjar * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar * @param ksessionName name of the ksession define in kmodule.xml stored in kjar * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId, String kbaseName, String ksessionName); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * It relies on KieClasspathContainer that requires the presence of kmodule.xml in the META-INF folder which * defines the kjar itself. * Expects to use default kbase and ksession from kmodule. * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(); /** * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on: * <ul> * <li>DefaultRuntimeEnvironment</li> * </ul> * It relies on KieClasspathContainer that requires the presence of kmodule.xml in the META-INF folder which * defines the kjar itself. * @param kbaseName name of the kbase defined in kmodule.xml * @param ksessionName name of the ksession define in kmodule.xml * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults * * @see DefaultRuntimeEnvironment */ public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(String kbaseName, String ksessionName);", "/** * Returns new instances of <code>WorkItemHandler</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case handler need to make use of it internally * @return map of handlers to be registered - in case of no handlers empty map shall be returned. */ Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime); /** * Returns new instances of <code>ProcessEventListener</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally * @return list of listeners to be registered - in case of no listeners empty list shall be returned. 
*/ List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime); /** * Returns new instances of <code>AgendaEventListener</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally * @return list of listeners to be registered - in case of no listeners empty list shall be returned. */ List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime); /** * Returns new instances of <code>WorkingMemoryEventListener</code> that will be registered on <code>RuntimeEngine</code> * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally * @return list of listeners to be registered - in case of no listeners empty list shall be returned. */ List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime);", "drools.workItemHandlers = CustomWorkItemHandlers.conf", "[ \"Log\": new org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler(), \"WebService\": new org.jbpm.process.workitem.webservice.WebServiceWorkItemHandler(ksession), \"Rest\": new org.jbpm.process.workitem.rest.RESTWorkItemHandler(), \"Service Task\" : new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession) ]", "public interface WorkItemHandlerProducer { /** * Returns a map of work items (key = work item name, value= work item handler instance) * to be registered on the KieSession * <br/> * The following parameters are accepted: * <ul> * <li>ksession</li> * <li>taskService</li> * <li>runtimeManager</li> * </ul> * * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out * and provide valid instances for given owner * @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances * @return map of work item handler instances (recommendation is to always return new instances when this method is invoked) */ Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params); }", "public interface EventListenerProducer<T> { /** * Returns a list of instances for given (T) type of listeners * <br/> * The following parameters are accepted: * <ul> * <li>ksession</li> * <li>taskService</li> * <li>runtimeManager</li> * </ul> * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out * and provide valid instances for given owner * @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances * @return list of listener instances (recommendation is to always return new instances when this method is invoked) */ List<T> getEventListeners(String identifier, Map<String, Object> params); }", "// Create deployment unit by providing the GAV of the KJAR DeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION); // Get the deploymentId for the deployed unit String deploymentId = deploymentUnit.getIdentifier(); // Deploy the unit deploymentService.deploy(deploymentUnit); // Retrieve the deployed unit DeployedUnit deployed = deploymentService.getDeployedUnit(deploymentId); // Get the runtime manager RuntimeManager manager = deployed.getRuntimeManager();", "public interface DeploymentService { void deploy(DeploymentUnit unit); void undeploy(DeploymentUnit unit); RuntimeManager getRuntimeManager(String deploymentUnitId); DeployedUnit getDeployedUnit(String deploymentUnitId); 
Collection<DeployedUnit> getDeployedUnits(); void activate(String deploymentId); void deactivate(String deploymentId); boolean isDeployed(String deploymentUnitId); }", "String processId = \"org.jbpm.writedocument\"; Collection<UserTaskDefinition> processTasks = bpmn2Service.getTasksDefinitions(deploymentUnit.getIdentifier(), processId); Map<String, String> processData = bpmn2Service.getProcessVariables(deploymentUnit.getIdentifier(), processId); Map<String, String> taskInputMappings = bpmn2Service.getTaskInputMappings(deploymentUnit.getIdentifier(), processId, \"Write a Document\" );", "public interface DefinitionService { ProcessDefinition buildProcessDefinition(String deploymentId, String bpmn2Content, ClassLoader classLoader, boolean cache) throws IllegalArgumentException; ProcessDefinition getProcessDefinition(String deploymentId, String processId); Collection<String> getReusableSubProcesses(String deploymentId, String processId); Map<String, String> getProcessVariables(String deploymentId, String processId); Map<String, String> getServiceTasks(String deploymentId, String processId); Map<String, Collection<String>> getAssociatedEntities(String deploymentId, String processId); Collection<UserTaskDefinition> getTasksDefinitions(String deploymentId, String processId); Map<String, String> getTaskInputMappings(String deploymentId, String processId, String taskName); Map<String, String> getTaskOutputMappings(String deploymentId, String processId, String taskName); }", "KModuleDeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION); deploymentService.deploy(deploymentUnit); long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), \"customtask\"); ProcessInstance pi = processService.getProcessInstance(processInstanceId);", "public interface ProcessService { /** * Starts a process with no variables * * @param deploymentId deployment identifier * @param processId process identifier * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId); /** * Starts a process and sets variables * * @param deploymentId deployment identifier * @param processId process identifier * @param params process variables * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId, Map<String, Object> params); /** * Starts a process with no variables and assigns a correlation key * * @param deploymentId deployment identifier * @param processId process identifier * @param correlationKey correlation key to be assigned to the process instance - must be unique * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId, 
CorrelationKey correlationKey); /** * Starts a process, sets variables, and assigns a correlation key * * @param deploymentId deployment identifier * @param processId process identifier * @param correlationKey correlation key to be assigned to the process instance - must be unique * @param params process variables * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcess(String deploymentId, String processId, CorrelationKey correlationKey, Map<String, Object> params); /** * Run a process that is designed to start and finish in a single transaction. * This method starts the process and returns when the process completes. * It returns the state of process variables at the outcome of the process * * @param deploymentId deployment identifier for the KJAR file of the process * @param processId process identifier * @param params process variables * @return the state of process variables at the end of the process */ Map<String, Object> computeProcessOutcome(String deploymentId, String processId, Map<String, Object> params); /** * Starts a process at the listed nodes, instead of the normal starting point. * This method can be used for restarting a process that was aborted. However, * it does not restore the context of a previous process instance. You must * supply all necessary variables when calling this method. * This method does not guarantee that the process is started in a valid state. * * @param deploymentId deployment identifier * @param processId process identifier * @param params process variables * @param nodeIds list of BPMN node identifiers where the process must start * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcessFromNodeIds(String deploymentId, String processId, Map<String, Object> params, String... nodeIds); /** * Starts a process at the listed nodes, instead of the normal starting point, * and assigns a correlation key. * This method can be used for restarting a process that was aborted. However, * it does not restore the context of a previous process instance. You must * supply all necessary variables when calling this method. * This method does not guarantee that the process is started in a valid state. * * @param deploymentId deployment identifier * @param processId process identifier * @param key correlation key (must be unique) * @param params process variables * @param nodeIds list of BPMN node identifiers where the process must start. * @return process instance IDentifier * @throws RuntimeException in case of encountered errors * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active */ Long startProcessFromNodeIds(String deploymentId, String processId, CorrelationKey key, Map<String, Object> params, String... 
nodeIds); /** * Aborts the specified process * * @param processInstanceId process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstance(Long processInstanceId); /** * Aborts the specified process * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstance(String deploymentId, Long processInstanceId); /** * Aborts all specified processes * * @param processInstanceIds list of process instance unique identifiers * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstances(List<Long> processInstanceIds); /** * Aborts all specified processes * * @param deploymentId deployment to which the process instance belongs * @param processInstanceIds list of process instance unique identifiers * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void abortProcessInstances(String deploymentId, List<Long> processInstanceIds); /** * Signals an event to a single process instance * * @param processInstanceId the process instance unique identifier * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstance(Long processInstanceId, String signalName, Object event); /** * Signals an event to a single process instance * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId the process instance unique identifier * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstance(String deploymentId, Long processInstanceId, String signalName, Object event); /** * Signal an event to a list of process instances * * @param processInstanceIds list of process instance unique identifiers * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstances(List<Long> processInstanceIds, String signalName, Object event); /** * Signal an event to a list of process instances * * @param deploymentId deployment to which the process instances belong * @param processInstanceIds list of process instance unique identifiers * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in 
case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void signalProcessInstances(String deploymentId, List<Long> processInstanceIds, String signalName, Object event); /** * Signal an event to a single process instance by correlation key * * @param correlationKey the unique correlation key of the process instance * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given key was not found */ void signalProcessInstanceByCorrelationKey(CorrelationKey correlationKey, String signalName, Object event); /** * Signal an event to a single process instance by correlation key * * @param deploymentId deployment to which the process instance belongs * @param correlationKey the unique correlation key of the process instance * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given key was not found */ void signalProcessInstanceByCorrelationKey(String deploymentId, CorrelationKey correlationKey, String signalName, Object event); /** * Signal an event to given list of correlation keys * * @param correlationKeys list of unique correlation keys of process instances * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with one of the given keys was not found */ void signalProcessInstancesByCorrelationKeys(List<CorrelationKey> correlationKeys, String signalName, Object event); /** * Signal an event to given list of correlation keys * * @param deploymentId deployment to which the process instances belong * @param correlationKeys list of unique correlation keys of process instances * @param signalName the ID of the signal in the process * @param event the event object to be passed in with the event * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with one of the given keys was not found */ void signalProcessInstancesByCorrelationKeys(String deploymentId, List<CorrelationKey> correlationKeys, String signalName, Object event); /** * Signal an event to a any process instance that listens to a given signal and belongs to a given deployment * * @param deployment identifier of the deployment * @param signalName the ID of the signal in the process * @param event the event object to be passed with the event * @throws DeploymentNotFoundException in case the deployment unit was not found */ void signalEvent(String deployment, String signalName, Object event); /** * Returns process instance information. Will return null if no * active process with the ID is found * * @param processInstanceId The process instance unique identifier * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(Long processInstanceId); /** * Returns process instance information. 
Will return null if no * active process with the ID is found * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(String deploymentId, Long processInstanceId); /** * Returns process instance information. Will return null if no * active process with that correlation key is found * * @param correlationKey correlation key assigned to the process instance * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(CorrelationKey correlationKey); /** * Returns process instance information. Will return null if no * active process with that correlation key is found * * @param deploymentId deployment to which the process instance belongs * @param correlationKey correlation key assigned to the process instance * @return Process instance information * @throws DeploymentNotFoundException in case the deployment unit was not found */ ProcessInstance getProcessInstance(String deploymentId, CorrelationKey correlationKey); /** * Sets a process variable. * @param processInstanceId The process instance unique identifier * @param variableId The variable ID to set * @param value The variable value * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariable(Long processInstanceId, String variableId, Object value); /** * Sets a process variable. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @param variableId The variable id to set. * @param value The variable value. * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariable(String deploymentId, Long processInstanceId, String variableId, Object value); /** * Sets process variables. * * @param processInstanceId The process instance unique identifier * @param variables map of process variables (key = variable name, value = variable value) * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariables(Long processInstanceId, Map<String, Object> variables); /** * Sets process variables. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @param variables map of process variables (key = variable name, value = variable value) * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ void setProcessVariables(String deploymentId, Long processInstanceId, Map<String, Object> variables); /** * Gets a process instance variable. 
* * @param processInstanceId the process instance unique identifier * @param variableName the variable name to get from the process * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Object getProcessInstanceVariable(Long processInstanceId, String variableName); /** * Gets a process instance variable. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId the process instance unique identifier * @param variableName the variable name to get from the process * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Object getProcessInstanceVariable(String deploymentId, Long processInstanceId, String variableName); /** * Gets a process instance variable values. * * @param processInstanceId The process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Map<String, Object> getProcessInstanceVariables(Long processInstanceId); /** * Gets a process instance variable values. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId The process instance unique identifier * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ Map<String, Object> getProcessInstanceVariables(String deploymentId, Long processInstanceId); /** * Returns all signals available in current state of given process instance * * @param processInstanceId process instance ID * @return list of available signals or empty list if no signals are available */ Collection<String> getAvailableSignals(Long processInstanceId); /** * Returns all signals available in current state of given process instance * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID * @return list of available signals or empty list if no signals are available */ Collection<String> getAvailableSignals(String deploymentId, Long processInstanceId); /** * Completes the specified WorkItem with the given results * * @param id workItem ID * @param results results of the workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void completeWorkItem(Long id, Map<String, Object> results); /** * Completes the specified WorkItem with the given results * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID to which the work item belongs * @param id workItem ID * @param results results of the workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void completeWorkItem(String deploymentId, Long processInstanceId, Long id, Map<String, Object> results); /** * Abort the specified workItem * * @param id workItem ID * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void 
abortWorkItem(Long id); /** * Abort the specified workItem * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID to which the work item belongs * @param id workItem ID * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ void abortWorkItem(String deploymentId, Long processInstanceId, Long id); /** * Returns the specified workItem * * @param id workItem ID * @return The specified workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ WorkItem getWorkItem(Long id); /** * Returns the specified workItem * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID to which the work item belongs * @param id workItem ID * @return The specified workItem * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws WorkItemNotFoundException in case a work item with the given ID was not found */ WorkItem getWorkItem(String deploymentId, Long processInstanceId, Long id); /** * Returns active work items by process instance ID. * * @param processInstanceId process instance ID * @return The list of active workItems for the process instance * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ List<WorkItem> getWorkItemByProcessInstance(Long processInstanceId); /** * Returns active work items by process instance ID. * * @param deploymentId deployment to which the process instance belongs * @param processInstanceId process instance ID * @return The list of active workItems for the process instance * @throws DeploymentNotFoundException in case the deployment unit was not found * @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found */ List<WorkItem> getWorkItemByProcessInstance(String deploymentId, Long processInstanceId); /** * Executes the provided command on the underlying command executor (usually KieSession) * @param deploymentId deployment identifier * @param command actual command for execution * @return results of the command execution * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active for restricted commands (for example, start process) */ public <T> T execute(String deploymentId, Command<T> command); /** * Executes the provided command on the underlying command executor (usually KieSession) * @param deploymentId deployment identifier * @param context context implementation to be used to get the runtime engine * @param command actual command for execution * @return results of the command execution * @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist * @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active for restricted commands (for example, start process) */ public <T> T execute(String deploymentId, Context<?> context, Command<T> command); }", "Collection definitions = runtimeDataService.getProcesses(new QueryContext());", 
"Collection<processinstancedesc> instances = runtimeDataService.getProcessInstances(new QueryContext());", "Collection<nodeinstancedesc> instances = runtimeDataService.getProcessInstanceHistoryActive(processInstanceId, new QueryContext());", "List<tasksummary> taskSummaries = runtimeDataService.getTasksAssignedAsPotentialOwner(\"john\", new QueryFilter(0, 10));", "public interface RuntimeDataService { /** * Represents type of node instance log entries * */ enum EntryType { START(0), END(1), ABORTED(2), SKIPPED(3), OBSOLETE(4), ERROR(5); } // Process instance information /** * Returns a list of process instance descriptions * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the available process instances */ Collection<ProcessInstanceDesc> getProcessInstances(QueryContext queryContext); /** * Returns a list of all process instance descriptions with the given statuses and initiated by <code>initiator</code> * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param initiator the initiator of the {@link ProcessInstance} * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (states and initiator) */ Collection<ProcessInstanceDesc> getProcessInstances(List<Integer> states, String initiator, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process ID and statuses and initiated by <code>initiator</code> * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param processId ID of the {@link Process} (definition) used when starting the process instance * @param initiator initiator of the {@link ProcessInstance} * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (states, processId, and initiator) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessId(List<Integer> states, String processId, String initiator, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process name and statuses and initiated by <code>initiator</code> * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param processName name (not ID) of the {@link Process} (definition) used when starting the process instance * @param initiator initiator of the {@link ProcessInstance} * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (states, processName and initiator) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessName(List<Integer> states, String processName, String initiator, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given deployment ID and statuses * @param deploymentId deployment ID of the runtime * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * 
the given criteria (deploymentId and states) */ Collection<ProcessInstanceDesc> getProcessInstancesByDeploymentId(String deploymentId, List<Integer> states, QueryContext queryContext); /** * Returns process instance descriptions found for the given processInstanceId. If no descriptions are found, null is returned. At the same time, the method * fetches all active tasks (in status: Ready, Reserved, InProgress) to provide the information about what user task is keeping each instance * and who owns the task (if the task is already claimed by a user) * @param processInstanceId ID of the process instance to be fetched * @return process instance information, in the form of a {@link ProcessInstanceDesc} instance */ ProcessInstanceDesc getProcessInstanceById(long processInstanceId); /** * Returns the active process instance description found for the given correlation key. If none is found, returns null. At the same time it * fetches all active tasks (in status: Ready, Reserved, InProgress) to provide information about which user task is keeping each instance * and who owns the task (if the task is already claimed by a user) * @param correlationKey correlation key assigned to the process instance * @return process instance information, in the form of a {@link ProcessInstanceDesc} instance */ ProcessInstanceDesc getProcessInstanceByCorrelationKey(CorrelationKey correlationKey); /** * Returns process instances descriptions (regardless of their states) found for the given correlation key. If no descriptions are found, an empty list is returned * This query uses 'LIKE' to match correlation keys so it accepts partial keys. Matching * is performed based on a 'starts with' criterion * @param correlationKey correlation key assigned to the process instance * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given correlation key */ Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKey(CorrelationKey correlationKey, QueryContext queryContext); /** * Returns process instance descriptions, filtered by their states, that were found for the given correlation key. If none are found, returns an empty list * This query uses 'LIKE' to match correlation keys so it accepts partial keys. 
Matching * is performed based on a 'starts with' criterion * @param correlationKey correlation key assigned to process instance * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given correlation key */ Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKeyAndStatus(CorrelationKey correlationKey, List<Integer> states, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process definition ID * @param processDefId ID of the process definition * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (deploymentId and states) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, QueryContext queryContext); /** * Returns a list of process instance descriptions found for the given process definition ID, filtered by state * @param processDefId ID of the process definition * @param states list of possible state (int) values that the {@link ProcessInstance} can have * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that match * the given criteria (deploymentId and states) */ Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, List<Integer> states, QueryContext queryContext); /** * Returns process instance descriptions that match process instances that have the given variable defined, filtered by state * @param variableName name of the variable that process instance should have * @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that have the given variable defined */ Collection<ProcessInstanceDesc> getProcessInstancesByVariable(String variableName, List<Integer> states, QueryContext queryContext); /** * Returns process instance descriptions that match process instances that have the given variable defined and the value of the variable matches the given variableValue * @param variableName name of the variable that process instance should have * @param variableValue value of the variable to match * @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the process instances that have the given variable defined with the given value */ Collection<ProcessInstanceDesc> getProcessInstancesByVariableAndValue(String variableName, String variableValue, List<Integer> states, QueryContext queryContext); /** * Returns a list of process instance descriptions that have the specified parent * @param parentProcessInstanceId ID of the parent process instance * @param states list of possible state (int) values that the {@link ProcessInstance} can have. 
If null, returns only active instances * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessInstanceDesc} instances representing the available process instances */ Collection<ProcessInstanceDesc> getProcessInstancesByParent(Long parentProcessInstanceId, List<Integer> states, QueryContext queryContext); /** * Returns a list of process instance descriptions that are subprocesses of the specified process, or subprocesses of those subprocesses, and so on. The list includes the full hierarchy of subprocesses under the specified parent process * @param processInstanceId ID of the parent process instance * @return list of {@link ProcessInstanceDesc} instances representing the full hierarchy of this process */ Collection<ProcessInstanceDesc> getProcessInstancesWithSubprocessByProcessInstanceId(Long processInstanceId, List<Integer> states, QueryContext queryContext); // Node and Variable instance information /** * Returns the active node instance descriptor for the given work item ID, if the work item exists and is active * @param workItemId identifier of the work item * @return NodeInstanceDesc for work item if it exists and is still active, otherwise null is returned */ NodeInstanceDesc getNodeInstanceForWorkItem(Long workItemId); /** * Returns a trace of all active nodes for the given process instance ID * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return */ Collection<NodeInstanceDesc> getProcessInstanceHistoryActive(long processInstanceId, QueryContext queryContext); /** * Returns a trace of all executed (completed) nodes for the given process instance ID * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return */ Collection<NodeInstanceDesc> getProcessInstanceHistoryCompleted(long processInstanceId, QueryContext queryContext); /** * Returns a complete trace of all executed (completed) and active nodes for the given process instance ID * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return {@link NodeInstance} information, in the form of a list of {@link NodeInstanceDesc} instances, * that come from a process instance that matches the given criteria (deploymentId, processId) */ Collection<NodeInstanceDesc> getProcessInstanceFullHistory(long processInstanceId, QueryContext queryContext); /** * Returns a complete trace of all events of the given type (START, END, ABORTED, SKIPPED, OBSOLETE or ERROR) for the given process instance * @param processInstanceId unique identifier of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @param type type of events to be returned (START, END, ABORTED, SKIPPED, OBSOLETE or ERROR). 
To return all events, use {@link #getProcessInstanceFullHistory(long, QueryContext)} * @return collection of node instance descriptions */ Collection<NodeInstanceDesc> getProcessInstanceFullHistoryByType(long processInstanceId, EntryType type, QueryContext queryContext); /** * Returns a trace of all nodes for the given node types and process instance ID * @param processInstanceId unique identifier of the process instance * @param nodeTypes list of node types to filter nodes of the process instance * @param queryContext control parameters for the result, such as sorting and paging * @return collection of node instance descriptions */ Collection<NodeInstanceDesc> getNodeInstancesByNodeType(long processInstanceId, List<String> nodeTypes, QueryContext queryContext); /** * Returns a trace of all nodes for the given node types and correlation key * @param correlationKey correlation key * @param states list of states * @param nodeTypes list of node types to filter nodes of process instance * @param queryContext control parameters for the result, such as sorting and paging * @return collection of node instance descriptions */ Collection<NodeInstanceDesc> getNodeInstancesByCorrelationKeyNodeType(CorrelationKey correlationKey, List<Integer> states, List<String> nodeTypes, QueryContext queryContext); /** * Returns a collection of all process variables and their current values for the given process instance * @param processInstanceId process instance ID * @return information about variables in the specified process instance, * represented by a list of {@link VariableDesc} instances */ Collection<VariableDesc> getVariablesCurrentState(long processInstanceId); /** * Returns a collection of changes to the given variable within the scope of a process instance * @param processInstanceId unique identifier of the process instance * @param variableId ID of the variable * @param queryContext control parameters for the result, such as sorting and paging * @return information about the variable with the given ID in the specified process instance, * represented by a list of {@link VariableDesc} instances */ Collection<VariableDesc> getVariableHistory(long processInstanceId, String variableId, QueryContext queryContext); // Process information /** * Returns a list of process definitions for the given deployment ID * @param deploymentId deployment ID of the runtime * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessDefinition} instances representing processes that match * the given criteria (deploymentId) */ Collection<ProcessDefinition> getProcessesByDeploymentId(String deploymentId, QueryContext queryContext); /** * Returns a list of process definitions that match the given filter * @param filter regular expression * @param queryContext control parameters for the result, such as sorting and paging * @return list of {@link ProcessDefinition} instances with a name or ID that matches the given regular expression */ Collection<ProcessDefinition> getProcessesByFilter(String filter, QueryContext queryContext); /** * Returns all process definitions available * @param queryContext control parameters for the result, such as sorting and paging * @return list of all available processes, in the form a of a list of {@link ProcessDefinition} instances */ Collection<ProcessDefinition> getProcesses(QueryContext queryContext); /** * Returns a list of process definition identifiers for the given deployment ID * @param deploymentId deployment ID of the runtime * 
@param queryContext control parameters for the result, such as sorting and paging * @return list of all available process id's for a particular deployment/runtime */ Collection<String> getProcessIds(String deploymentId, QueryContext queryContext); /** * Returns process definitions for the given process ID regardless of the deployment * @param processId ID of the process * @return collection of {@link ProcessDefinition} instances representing the {@link Process} * with the specified process ID */ Collection<ProcessDefinition> getProcessesById(String processId); /** * Returns the process definition for the given deployment and process identifiers * @param deploymentId ID of the deployment (runtime) * @param processId ID of the process * @return {@link ProcessDefinition} instance, representing the {@link Process} * that is present in the specified deployment with the specified process ID */ ProcessDefinition getProcessesByDeploymentIdProcessId(String deploymentId, String processId); // user task query operations /** * Return a task by its workItemId * @param workItemId * @return @{@link UserTaskInstanceDesc} task */ UserTaskInstanceDesc getTaskByWorkItemId(Long workItemId); /** * Return a task by its taskId * @param taskId * @return @{@link UserTaskInstanceDesc} task */ UserTaskInstanceDesc getTaskById(Long taskId); /** * Return a task by its taskId with SLA data if the withSLA param is true * @param taskId * @param withSLA * @return @{@link UserTaskInstanceDesc} task */ UserTaskInstanceDesc getTaskById(Long taskId, boolean withSLA); /** * Return a list of assigned tasks for a Business Administrator user. Business * administrators play the same role as task stakeholders but at task type * level. Therefore, business administrators can perform the exact same * operations as task stakeholders. 
Business administrators can also observe * the progress of notifications * * @param userId identifier of the Business Administrator user * @param filter filter for the list of assigned tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsBusinessAdministrator(String userId, QueryFilter filter); /** * Return a list of assigned tasks for a Business Administrator user for with one of the listed * statuses * @param userId identifier of the Business Administrator user * @param statuses the statuses of the tasks to return * @param filter filter for the list of assigned tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsBusinessAdministratorByStatus(String userId, List<Status> statuses, QueryFilter filter); /** * Return a list of tasks that a user is eligible to own * * @param userId identifier of the user * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, QueryFilter filter); /** * Return a list of tasks the user or user groups are eligible to own * * @param userId identifier of the user * @param groupIds a list of identifiers of the groups * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, QueryFilter filter); /** * Return a list of tasks the user is eligible to own and that are in one of the listed * statuses * * @param userId identifier of the user * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, QueryFilter filter); /** * Return a list of tasks the user or groups are eligible to own and that are in one of the listed * statuses * @param userId identifier of the user * @param groupIds filter for the identifiers of the groups * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, List<Status> status, QueryFilter filter); /** * Return a list of tasks the user is eligible to own, that are in one of the listed * statuses, and that have an expiration date starting at <code>from</code>. Tasks that do not have expiration date set * will also be included in the result set * * @param userId identifier of the user * @param status filter for the task statuses * @param from earliest expiration date for the tasks * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksAssignedAsPotentialOwnerByExpirationDateOptional(String userId, List<Status> status, Date from, QueryFilter filter); /** * Return a list of tasks the user has claimed, that are in one of the listed * statuses, and that have an expiration date starting at <code>from</code>. 
Tasks that do not have expiration date set * will also be included in the result set * * @param userId identifier of the user * @param strStatuses filter for the task statuses * @param from earliest expiration date for the tasks * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksOwnedByExpirationDateOptional(String userId, List<Status> strStatuses, Date from, QueryFilter filter); /** * Return a list of tasks the user has claimed * * @param userId identifier of the user * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksOwned(String userId, QueryFilter filter); /** * Return a list of tasks the user has claimed with one of the listed * statuses * * @param userId identifier of the user * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksOwnedByStatus(String userId, List<Status> status, QueryFilter filter); /** * Get a list of tasks the Process Instance is waiting on * * @param processInstanceId identifier of the process instance * @return list of task identifiers */ List<Long> getTasksByProcessInstanceId(Long processInstanceId); /** * Get filter for the tasks the Process Instance is waiting on that are in one of the * listed statuses * * @param processInstanceId identifier of the process instance * @param status filter for the task statuses * @param filter filter for the list of tasks * @return list of @{@link TaskSummary} task summaries */ List<TaskSummary> getTasksByStatusByProcessInstanceId(Long processInstanceId, List<Status> status, QueryFilter filter); /** * Get a list of task audit logs for all tasks owned by the user, applying a query filter to the list of tasks * * * @param userId identifier of the user that owns the tasks * @param filter filter for the list of tasks * @return list of @{@link AuditTask} task audit logs */ List<AuditTask> getAllAuditTask(String userId, QueryFilter filter); /** * Get a list of task audit logs for all tasks that are active and owned by the user, applying a query filter to the list of tasks * * @param userId identifier of the user that owns the tasks * @param filter filter for the list of tasks * @return list of @{@link AuditTask} audit tasks */ List<AuditTask> getAllAuditTaskByStatus(String userId, QueryFilter filter); /** * Get a list of task audit logs for group tasks (actualOwner == null) for the user, applying a query filter to the list of tasks * * @param userId identifier of the user that is associated with the group tasks * @param filter filter for the list of tasks * @return list of @{@link AuditTask} audit tasks */ List<AuditTask> getAllGroupAuditTask(String userId, QueryFilter filter); /** * Get a list of task audit logs for tasks that are assigned to a Business Administrator user, applying a query filter to the list of tasks * * @param userId identifier of the Business Administrator user * @param filter filter for the list of tasks * @return list of @{@link AuditTask} audit tasks */ List<AuditTask> getAllAdminAuditTask(String userId, QueryFilter filter); /** * Gets a list of task events for the given task * @param taskId identifier of the task * @param filter for the list of events * @return list of @{@link TaskEvent} task events */ List<TaskEvent> getTaskEvents(long taskId, QueryFilter filter); /** * Query on {@link TaskSummary} instances * @param userId the user 
associated with the tasks queried * @return {@link TaskSummaryQueryBuilder} used to create the query */ TaskSummaryQueryBuilder taskSummaryQuery(String userId); /** * Gets a list of {@link TaskSummary} instances for tasks that define a given variable * @param userId the ID of the user associated with the tasks * @param variableName the name of the task variable * @param statuses the list of statuses that the task can have * @param queryContext the query context * @return a {@link List} of {@link TaskSummary} instances */ List<TaskSummary> getTasksByVariable(String userId, String variableName, List<Status> statuses, QueryContext queryContext); /** * Gets a list of {@link TaskSummary} instances for tasks that define a given variable and the variable is set to the given value * @param userId the ID of the user associated with the tasks * @param variableName the name of the task variable * @param variableValue the value of the task variable * @param statuses the list of statuses that the task can have * @param context the query context * @return a {@link List} of {@link TaskSummary} instances */ List<TaskSummary> getTasksByVariableAndValue(String userId, String variableName, String variableValue, List<Status> statuses, QueryContext context); }", "long processInstanceId = processService.startProcess(deployUnit.getIdentifier(), \"org.jbpm.writedocument\"); List<Long> taskIds = runtimeDataService.getTasksByProcessInstanceId(processInstanceId); Long taskId = taskIds.get(0); userTaskService.start(taskId, \"john\"); UserTaskInstanceDesc task = runtimeDataService.getTaskById(taskId); Map<String, Object> results = new HashMap<String, Object>(); results.put(\"Result\", \"some document data\"); userTaskService.complete(taskId, \"john\", results);", "#============================================================================ Configure Main Scheduler Properties #============================================================================ org.quartz.scheduler.instanceName = jBPMClusteredScheduler org.quartz.scheduler.instanceId = AUTO #============================================================================ Configure ThreadPool #============================================================================ org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool org.quartz.threadPool.threadCount = 5 org.quartz.threadPool.threadPriority = 5 #============================================================================ Configure JobStore #============================================================================ org.quartz.jobStore.misfireThreshold = 60000 org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate org.quartz.jobStore.useProperties=false org.quartz.jobStore.dataSource=managedDS org.quartz.jobStore.nonManagedTXDataSource=nonManagedDS org.quartz.jobStore.tablePrefix=QRTZ_ org.quartz.jobStore.isClustered=true org.quartz.jobStore.clusterCheckinInterval = 20000 #========================================================================= Configure Datasources #========================================================================= org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psbpmsDS org.quartz.dataSource.nonManagedDS.jndiURL=jboss/datasources/quartzNonManagedDS", "queryService.query(\"my query def\", new NamedQueryMapper<Collection<ProcessInstanceDesc>>(\"ProcessInstances\"), new QueryContext());", "public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> { 
private Map<String, Object> parameters; private boolean built = false; public TestQueryParamBuilder(Map<String, Object> parameters) { this.parameters = parameters; } @Override public ColumnFilter build() { // return null if it was already invoked if (built) { return null; } String columnName = \"processInstanceId\"; ColumnFilter filter = FilterFactory.OR( FilterFactory.greaterOrEqualsTo((Long)parameters.get(\"min\")), FilterFactory.lowerOrEqualsTo((Long)parameters.get(\"max\"))); filter.setColumnId(columnName); built = true; return filter; } }", "queryService.query(\"my query def\", ProcessInstanceQueryMapper.get(), new QueryContext(), paramBuilder);", "SqlQueryDefinition query = new SqlQueryDefinition(\"getAllProcessInstances\", \"java:jboss/datasources/ExampleDS\"); query.setExpression(\"select * from processinstancelog\");", "queryService.registerQuery(query);", "Collection<ProcessInstanceDesc> instances = queryService.query(\"getAllProcessInstances\", ProcessInstanceQueryMapper.get(), new QueryContext());", "QueryContext ctx = new QueryContext(0, 100, \"start_date\", true); Collection<ProcessInstanceDesc> instances = queryService.query(\"getAllProcessInstances\", ProcessInstanceQueryMapper.get(), ctx);", "// single filter param Collection<ProcessInstanceDesc> instances = queryService.query(\"getAllProcessInstances\", ProcessInstanceQueryMapper.get(), new QueryContext(), QueryParam.likeTo(COLUMN_PROCESSID, true, \"org.jbpm%\")); // multiple filter params (AND) Collection<ProcessInstanceDesc> instances = queryService.query(\"getAllProcessInstances\", ProcessInstanceQueryMapper.get(), new QueryContext(), QueryParam.likeTo(COLUMN_PROCESSID, true, \"org.jbpm%\"), QueryParam.in(COLUMN_STATUS, 1, 3));", "public interface AdvanceRuntimeDataService { String TASK_ATTR_NAME = \"TASK_NAME\"; String TASK_ATTR_OWNER = \"TASK_OWNER\"; String TASK_ATTR_STATUS = \"TASK_STATUS\"; String PROCESS_ATTR_INSTANCE_ID = \"PROCESS_INSTANCE_ID\"; String PROCESS_ATTR_CORRELATION_KEY = \"PROCESS_CORRELATION_KEY\"; String PROCESS_ATTR_DEFINITION_ID = \"PROCESS_DEFINITION_ID\"; String PROCESS_ATTR_DEPLOYMENT_ID = \"PROCESS_DEPLOYMENT_ID\"; String PROCESS_COLLECTION_VARIABLES = \"ATTR_COLLECTION_VARIABLES\"; List<ProcessInstanceWithVarsDesc> queryProcessByVariables(List<QueryParam> attributes, List<QueryParam> processVariables, QueryContext queryContext); List<ProcessInstanceWithVarsDesc> queryProcessByVariablesAndTask(List<QueryParam> attributes, List<QueryParam> processVariables, List<QueryParam> taskVariables, List<String> potentialOwners, QueryContext queryContext); List<UserTaskInstanceWithPotOwnerDesc> queryUserTasksByVariables(List<QueryParam> attributes, List<QueryParam> taskVariables, List<QueryParam> processVariables, List<String> potentialOwners, QueryContext queryContext); }", "public interface ProcessInstanceMigrationService { /** * Migrates a given process instance that belongs to the source deployment into the target process ID that belongs to the target deployment. * The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. 
* @param sourceDeploymentId deployment to which the process instance to be migrated belongs * @param processInstanceId ID of the process instance to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instance should be migrated * @return returns complete migration report */ MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId); /** * Migrates a given process instance (with node mapping) that belongs to source deployment into the target process ID that belongs to the target deployment. * The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instance to be migrated belongs * @param processInstanceId ID of the process instance to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instance should be migrated * @param nodeMapping node mapping - source and target unique IDs of nodes to be mapped - from process instance active nodes to new process nodes * @return returns complete migration report */ MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping); /** * Migrates given process instances that belong to the source deployment into a target process ID that belongs to the target deployment. * The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instances to be migrated belong * @param processInstanceIds list of process instance IDs to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instances should be migrated * @return returns complete migration report */ List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId); /** * Migrates given process instances (with node mapping) that belong to the source deployment into a target process ID that belongs to the target deployment. 
* The following rules are enforced: * <ul> * <li>the source deployment ID must point to an existing deployment</li> * <li>the process instance ID must point to an existing and active process instance</li> * <li>the target deployment must exist</li> * <li>the target process ID must exist in the target deployment</li> * </ul> * Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration. * @param sourceDeploymentId deployment to which the process instances to be migrated belong * @param processInstanceIds list of process instance ID to be migrated * @param targetDeploymentId ID of the deployment to which the target process belongs * @param targetProcessId ID of the process to which the process instances should be migrated * @param nodeMapping node mapping - source and target unique IDs of nodes to be mapped - from process instance active nodes to new process nodes * @return returns list of migration reports one per each process instance */ List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping); }", "public interface ProcessAdminServicesClient { MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId); MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping); List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId); List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping); }", "import org.kie.server.api.model.admin.MigrationReportInstance; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; public class ProcessInstanceMigrationTest{ private static final String SOURCE_CONTAINER = \"com.redhat:MigrateMe:1.0\"; private static final String SOURCE_PROCESS_ID = \"MigrateMe.MigrateMev1\"; private static final String TARGET_CONTAINER = \"com.redhat:MigrateMe:2\"; private static final String TARGET_PROCESS_ID = \"MigrateMe.MigrateMeV2\"; public static void main(String[] args) { KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(\"http://HOST:PORT/kie-server/services/rest/server\", \"USERNAME\", \"PASSWORD\"); config.setMarshallingFormat(MarshallingFormat.JSON); KieServicesClient client = KieServicesFactory.newKieServicesClient(config); long sourcePid = client.getProcessClient().startProcess(SOURCE_CONTAINER, SOURCE_PROCESS_ID); // Use the 'report' object to return migration results. 
MigrationReportInstance report = client.getAdminClient().migrateProcessInstance(SOURCE_CONTAINER, sourcePid,TARGET_CONTAINER, TARGET_PROCESS_ID); System.out.println(\"Was migration successful:\" + report.isSuccessful()); client.getProcessClient().abortProcessInstance(TARGET_CONTAINER, sourcePid); } }", "org.jbpm.HR:latest", "KModuleDeploymentUnit deploymentUnitV1 = new KModuleDeploymentUnit(\"org.jbpm\", \"HR\", \"1.0\"); deploymentService.deploy(deploymentUnitV1); long processInstanceId = processService.startProcess(\"org.jbpm:HR:LATEST\", \"customtask\"); ProcessInstanceDesc piDesc = runtimeDataService.getProcessInstanceById(processInstanceId); // We have started a process with the project version 1 assertEquals(deploymentUnitV1.getIdentifier(), piDesc.getDeploymentId()); // Next we deploy version 2 KModuleDeploymentUnit deploymentUnitV2 = new KModuleDeploymentUnit(\"org.jbpm\", \"HR\", \"2.0\"); deploymentService.deploy(deploymentUnitV2); processInstanceId = processService.startProcess(\"org.jbpm:HR:LATEST\", \"customtask\"); piDesc = runtimeDataService.getProcessInstanceById(processInstanceId); // This time we have started a process with the project version 2 assertEquals(deploymentUnitV2.getIdentifier(), piDesc.getDeploymentId());", "TransactionalCommandService commandService = new TransactionalCommandService(emf); DeploymentStore store = new DeploymentStore(); store.setCommandService(commandService); DeploymentSynchronizer sync = new DeploymentSynchronizer(); sync.setDeploymentService(deploymentService); sync.setDeploymentStore(store); DeploymentSyncInvoker invoker = new DeploymentSyncInvoker(sync, 2L, 3L, TimeUnit.SECONDS); invoker.start(); . invoker.stop();", "public interface ProcessEventListener extends EventListener { /** * This listener method is invoked right before a process instance is being started. * @param event */ void beforeProcessStarted(ProcessStartedEvent event); /** * This listener method is invoked right after a process instance has been started. * @param event */ void afterProcessStarted(ProcessStartedEvent event); /** * This listener method is invoked right before a process instance is being completed (or aborted). * @param event */ void beforeProcessCompleted(ProcessCompletedEvent event); /** * This listener method is invoked right after a process instance has been completed (or aborted). * @param event */ void afterProcessCompleted(ProcessCompletedEvent event); /** * This listener method is invoked right before a node in a process instance is being triggered * (which is when the node is being entered, for example when an incoming connection triggers it). * @param event */ void beforeNodeTriggered(ProcessNodeTriggeredEvent event); /** * This listener method is invoked right after a node in a process instance has been triggered * (which is when the node was entered, for example when an incoming connection triggered it). * @param event */ void afterNodeTriggered(ProcessNodeTriggeredEvent event); /** * This listener method is invoked right before a node in a process instance is being left * (which is when the node is completed, for example when it has performed the task it was * designed for). * @param event */ void beforeNodeLeft(ProcessNodeLeftEvent event); /** * This listener method is invoked right after a node in a process instance has been left * (which is when the node was completed, for example when it performed the task it was * designed for). 
* @param event */ void afterNodeLeft(ProcessNodeLeftEvent event); /** * This listener method is invoked right before the value of a process variable is being changed. * @param event */ void beforeVariableChanged(ProcessVariableChangedEvent event); /** * This listener method is invoked right after the value of a process variable has been changed. * @param event */ void afterVariableChanged(ProcessVariableChangedEvent event); /** * This listener method is invoked right before a process/node instance's SLA has been violated. * @param event */ default void beforeSLAViolated(SLAViolatedEvent event) {} /** * This listener method is invoked right after a process/node instance's SLA has been violated. * @param event */ default void afterSLAViolated(SLAViolatedEvent event) {} /** * This listener method is invoked when a signal is sent * @param event */ default void onSignal(SignalEvent event) {} /** * This listener method is invoked when a message is sent * @param event */ default void onMessage(MessageEvent event) {} }", "WorkflowProcessInstance processInstance = event.getNodeInstance().getProcessInstance() NodeType nodeType = event.getNodeInstance().getNode().getNodeType()", "public interface TaskLifeCycleEventListener extends EventListener { public enum AssignmentType { POT_OWNER, EXCL_OWNER, ADMIN; } public void beforeTaskActivatedEvent(TaskEvent event); public void beforeTaskClaimedEvent(TaskEvent event); public void beforeTaskSkippedEvent(TaskEvent event); public void beforeTaskStartedEvent(TaskEvent event); public void beforeTaskStoppedEvent(TaskEvent event); public void beforeTaskCompletedEvent(TaskEvent event); public void beforeTaskFailedEvent(TaskEvent event); public void beforeTaskAddedEvent(TaskEvent event); public void beforeTaskExitedEvent(TaskEvent event); public void beforeTaskReleasedEvent(TaskEvent event); public void beforeTaskResumedEvent(TaskEvent event); public void beforeTaskSuspendedEvent(TaskEvent event); public void beforeTaskForwardedEvent(TaskEvent event); public void beforeTaskDelegatedEvent(TaskEvent event); public void beforeTaskNominatedEvent(TaskEvent event); public default void beforeTaskUpdatedEvent(TaskEvent event){}; public default void beforeTaskReassignedEvent(TaskEvent event){}; public default void beforeTaskNotificationEvent(TaskEvent event){}; public default void beforeTaskInputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void beforeTaskOutputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void beforeTaskAssignmentsAddedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; public default void beforeTaskAssignmentsRemovedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; public void afterTaskActivatedEvent(TaskEvent event); public void afterTaskClaimedEvent(TaskEvent event); public void afterTaskSkippedEvent(TaskEvent event); public void afterTaskStartedEvent(TaskEvent event); public void afterTaskStoppedEvent(TaskEvent event); public void afterTaskCompletedEvent(TaskEvent event); public void afterTaskFailedEvent(TaskEvent event); public void afterTaskAddedEvent(TaskEvent event); public void afterTaskExitedEvent(TaskEvent event); public void afterTaskReleasedEvent(TaskEvent event); public void afterTaskResumedEvent(TaskEvent event); public void afterTaskSuspendedEvent(TaskEvent event); public void afterTaskForwardedEvent(TaskEvent event); public void afterTaskDelegatedEvent(TaskEvent event); public void 
afterTaskNominatedEvent(TaskEvent event); public default void afterTaskReassignedEvent(TaskEvent event){}; public default void afterTaskUpdatedEvent(TaskEvent event){}; public default void afterTaskNotificationEvent(TaskEvent event){}; public default void afterTaskInputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void afterTaskOutputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){}; public default void afterTaskAssignmentsAddedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; public default void afterTaskAssignmentsRemovedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){}; }", "void addEventListener(AgendaEventListener listener); void addEventListener(RuleRuntimeEventListener listener); void removeEventListener(AgendaEventListener listener); void removeEventListener(RuleRuntimeEventListener listener); Collection<AgendaEventListener> getAgendaEventListeners(); Collection<RuleRuntimeEventListener> getRuleRintimeEventListeners();", "import org.kie.api.KieServices; import org.kie.api.logger.KieRuntimeLogger; KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, \"test\"); // add invocations to the process engine here, // e.g. ksession.startProcess(processId); logger.close();" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/processengine-core-con_process-engine
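As a minimal, illustrative sketch, the task-query operations described above can be combined with the services from the earlier example. It assumes runtimeDataService has already been bootstrapped the same way as in the startProcess/complete example; the user ID "john" and the page size are placeholders, and the getter names on UserTaskInstanceDesc should be checked against your jBPM version.

import java.util.Arrays;
import java.util.List;

import org.jbpm.services.api.model.UserTaskInstanceDesc;
import org.kie.api.task.model.Status;
import org.kie.api.task.model.TaskSummary;
import org.kie.internal.query.QueryFilter;

// List the tasks "john" is eligible to own that are Ready or Reserved (first 10 results)
List<TaskSummary> tasks = runtimeDataService.getTasksAssignedAsPotentialOwnerByStatus(
        "john",
        Arrays.asList(Status.Ready, Status.Reserved),
        new QueryFilter(0, 10));

// Look up the full task description for each summary and print a short report
for (TaskSummary summary : tasks) {
    UserTaskInstanceDesc desc = runtimeDataService.getTaskById(summary.getId());
    System.out.println(desc.getTaskId() + " - " + desc.getName() + " (" + desc.getStatus() + ")");
}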
13.6. OpenLDAP Setup Overview
13.6. OpenLDAP Setup Overview This section provides a quick overview for installing and configuring an OpenLDAP directory. For more details, refer to the following URLs: http://www.openldap.org/doc/admin/quickstart.html - The Quick-Start Guide on the OpenLDAP website. http://www.redhat.com/mirrors/LDP/HOWTO/LDAP-HOWTO.html - The LDAP Linux HOWTO from the Linux Documentation Project, mirrored on Red Hat's website. The basic steps for creating an LDAP server are as follows: Install the openldap , openldap-servers , and openldap-clients RPMs. Edit the /etc/openldap/slapd.conf file to specify the LDAP domain and server. Refer to Section 13.6.1, "Editing /etc/openldap/slapd.conf " for more information. Start slapd with the command: After configuring LDAP, use chkconfig , /usr/sbin/ntsysv , or the Services Configuration Tool to configure LDAP to start at boot time. For more information about configuring services, refer to the chapter titled Controlling Access to Services in the System Administrators Guide . Add entries to an LDAP directory with ldapadd . Use ldapsearch to determine if slapd is accessing the information correctly. At this point, the LDAP directory should be functioning properly and can be configured with LDAP-enabled applications. 13.6.1. Editing /etc/openldap/slapd.conf To use the slapd LDAP server, modify its configuration file, /etc/openldap/slapd.conf , to specify the correct domain and server. The suffix line names the domain for which the LDAP server provides information and should be changed from: so that it reflects a fully qualified domain name. For example: The rootdn entry is the Distinguished Name ( DN ) for a user who is unrestricted by access controls or administrative limit parameters set for operations on the LDAP directory. The rootdn user can be thought of as the root user for the LDAP directory. In the configuration file, change the rootdn line from its default value as in the following example: When populating an LDAP directory over a network, change the rootpw line - replacing the default value with an encrypted password string. To create an encrypted password string, type the following command: When prompted, type and then re-type a password. The program prints the resulting encrypted password to the shell prompt. , copy the newly created encrypted password into the /etc/openldap/slapd.conf on one of the rootpw lines and remove the hash mark ( # ). When finished, the line should look similar to the following example: Warning LDAP passwords, including the rootpw directive specified in /etc/openldap/slapd.conf , are sent over the network unencrypted , unless TLS encryption is enabled. To enable TLS encryption, review the comments in /etc/openldap/slapd.conf and refer to the man page for slapd.conf . For added security, the rootpw directive should be commented out after populating the LDAP directory by preceding it with a hash mark ( # ). When using the /usr/sbin/slapadd command line tool locally to populate the LDAP directory, use of the rootpw directive is not necessary. Important Only the root user can use /usr/sbin/slapadd . However, the directory server runs as the ldap user. Therefore, the directory server is unable to modify any files created by slapadd . To correct this issue, after using slapadd , type the following command:
[ "service ldap start", "suffix \"dc=your-domain,dc=com\"", "suffix \"dc=example,dc=com\"", "rootdn \"cn=root,dc=example,dc=com\"", "slappasswd", "rootpw {SSHA}vv2y+i6V6esazrIv70xSSnNAJE18bb2u", "chown -R ldap /var/lib/ldap" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ldap-quickstart
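As an illustrative follow-up to the steps above, a first entry can be added and verified with ldapadd and ldapsearch once slapd is running. The LDIF file name and organization values below are examples only; the suffix and rootdn match the slapd.conf examples in this section.

# example.ldif - starting entry for the dc=example,dc=com suffix (values are examples)
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
dc: example
o: Example Organization

# Add the entry, binding as the rootdn configured in slapd.conf
ldapadd -x -D "cn=root,dc=example,dc=com" -W -f example.ldif

# Confirm that slapd returns the new entry
ldapsearch -x -b "dc=example,dc=com" "(objectclass=*)"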
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_red_hat_insights_on_existing_rhel_systems_managed_by_red_hat_cloud_access/proc-providing-feedback-on-redhat-documentation
function::int_arg
function::int_arg Name function::int_arg - Return function argument as signed int Synopsis Arguments n index of argument to return Description Return the value of argument n as a signed int (i.e., a 32-bit integer sign-extended to 64 bits).
[ "int_arg:long(n:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-int-arg
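A brief, illustrative use of int_arg in a dwarfless (kprobe-based) probe is sketched below; the probed function name is a placeholder and should be replaced with a real symbol on the target kernel.

# Illustrative only: print the first argument of a hypothetical kernel function as a signed int
probe kprobe.function("my_kernel_function") {
    printf("arg1 = %d\n", int_arg(1))
}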
20.2. Setting Password Administrators
20.2. Setting Password Administrators The Directory Manager can add the password administrator role to a user or a group of users. Since access control instructions (ACI) need to be set, it is recommended that a group is used to allow just a single ACI set to manage all password administrators. A password administrator can perform any user password operations, including the following: forcing the user to change their password, changing a user's password to a different storage scheme defined in the password policy, bypassing the password syntax checks, and adding already hashed passwords. As explained in Section 20.1, "Setting User Passwords" , it is recommended that ordinary password updates are done by an existing role in the database with permissions to update only the userPassword attribute. Red Hat recommends not to use the password administrator account for these ordinary tasks. You can specify a user or a group as password administrator: In a local policy. For example: In a global policy. For example: Note You can add a new passwordAdminSkipInfoUpdate: on/off setting under the cn=config entry to provide a fine grained control over password updates performed by password administrators. When you enable this setting, passwords updates do not update certain attributes, for example, passwordHistory , passwordExpirationTime , passwordRetryCount , pwdReset , and passwordExpWarned .
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp set ou=people,dc=example,dc=com --pwdadmin \" cn=password_admins,ou=groups,dc=example,dc=com \"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy set --pwdadmin \" cn=password_admins,ou=groups,dc=example,dc=com \"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/password_administrators
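As an illustrative sketch, an ACI such as the following could grant the password administrators group write access to userPassword under the ou=people subtree used in the local policy example above. The file name is a placeholder and the exact ACI syntax should be verified against your Directory Server version.

# pwadmin-aci.ldif - example ACI for the password administrators group
dn: ou=people,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="userPassword")(version 3.0; acl "Password administrators"; allow (write) groupdn="ldap:///cn=password_admins,ou=groups,dc=example,dc=com";)

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -f pwadmin-aci.ldif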
10.5.60. Location
10.5.60. Location The <Location> and </Location> tags create a container in which access control based on URL can be specified. For instance, to allow people connecting from within the server's domain to see status reports, use the following directives: Replace <.example.com> with the second-level domain name for the Web server. To provide server configuration reports (including installed modules and configuration directives) to requests from inside the domain, use the following directives: Again, replace <.example.com> with the second-level domain name for the Web server.
[ "<Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from <.example.com> </Location>", "<Location /server-info> SetHandler server-info Order deny,allow Deny from all Allow from <.example.com> </Location>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-location
probe::netfilter.arp.out
probe::netfilter.arp.out Name probe::netfilter.arp.out - - Called for each outgoing ARP packet Synopsis netfilter.arp.out Values ar_tip Ethernet+IP only (ar_pro==0x800): target IP address nf_drop Constant used to signify a 'drop' verdict ar_pro Format of protocol address ar_sip Ethernet+IP only (ar_pro==0x800): source IP address indev Address of net_device representing input device, 0 if unknown ar_sha Ethernet+IP only (ar_pro==0x800): source hardware (MAC) address pf Protocol family -- always " arp " ar_op ARP opcode (command) nf_queue Constant used to signify a 'queue' verdict ar_hrd Format of hardware address nf_accept Constant used to signify an 'accept' verdict ar_data Address of ARP packet data region (after the header) ar_tha Ethernet+IP only (ar_pro==0x800): target hardware (MAC) address outdev_name Name of network device packet will be routed to (if known) nf_stop Constant used to signify a 'stop' verdict ar_hln Length of hardware address ar_pln Length of protocol address nf_stolen Constant used to signify a 'stolen' verdict length The length of the packet buffer contents, in bytes outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict arphdr Address of ARP header indev_name Name of network device packet was received on (if known)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netfilter-arp-out
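A short, illustrative probe using a few of the values listed above (the output format is arbitrary):

# Log each outgoing ARP packet with its opcode and the devices involved
probe netfilter.arp.out {
    printf("ARP out: op=%d in=%s out=%s\n", ar_op, indev_name, outdev_name)
}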
Chapter 25. file
Chapter 25. file The path to the log file from which the collector reads this log entry. Normally, this is a path in the /var/log file system of a cluster node. Data type text
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/file
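For illustration only, a collected record might carry the field as shown below; the surrounding field name and the path are examples, not part of the documented schema.

{
  "file": "/var/log/messages",
  "message": "example log entry"
}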
Chapter 8. Installing Using Anaconda
Chapter 8. Installing Using Anaconda This chapter provides step-by-step instructions for installing Red Hat Enterprise Linux using the Anaconda installer. The bulk of this chapter describes installation using the graphical user interface. A text mode is also available for systems with no graphical display, but this mode is limited in certain aspects (for example, custom partitioning is not possible in text mode). If your system does not have the ability to use the graphical mode, you can: Use Kickstart to automate the installation as described in Chapter 27, Kickstart Installations Perform the graphical installation remotely by connecting to the installation system from another computer with a graphical display using the VNC (Virtual Network Computing) protocol - see Chapter 25, Using VNC 8.1. Introduction to Anaconda The Red Hat Enterprise Linux installer, Anaconda , is different from most other operating system installation programs due to its parallel nature. Most installers follow a fixed path: you must choose your language first, then you configure network, then installation type, then partitioning, and so on. There is usually only one way to proceed at any given time. In Anaconda you are only required to select your language and locale first, and then you are presented with a central screen, where you can configure most aspects of the installation in any order you like. This does not apply to all parts of the installation process, however - for example, when installing from a network location, you must configure the network before you can select which packages to install. Some screens will be automatically configured depending on your hardware and the type of media you used to start the installation. You can still change the detected settings in any screen. Screens which have not been automatically configured, and therefore require your attention before you begin the installation, are marked by an exclamation mark. You cannot start the actual installation process before you finish configuring these settings. Additional differences appear in certain screens; notably the custom partitioning process is very different from other Linux distributions. These differences are described in each screen's subsection.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-installing-using-anaconda-x86
25.7.4. Using RELP
25.7.4. Using RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data logging in computer networks. It is designed to provide reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. To configure RELP, first install the rsyslog-relp package both on the server and the client: Then, configure both the server and the client. To configure the client, configure: loading the required modules the TCP input port the transport settings by adding the following configuration to the /etc/rsyslog.conf file: Replace port to start a listener at the required port. Replace target_IP and target_port with the IP address and port that identify the target server. To configure the server: configure loading the modules configure the TCP input similarly to the client configuration configure the rules and choose an action to be performed by adding the following configuration to the /etc/rsyslog.conf file: Replace target_port with the same value as on the clients. In the example, log_path specifies the path for storing messages.
[ "~]# yum install rsyslog-relp", "USDModLoad omrelp USDModLoad imuxsock USDModLoad imtcp USDInputTCPServerRun \" port \" *.* :omrelp:\" target_IP \":\" target_port \"", "USDModLoad imuxsock USDModLoad imrelp USDRuleSet relp *.* \" log_path \" USDInputRELPServerBindRuleset relp USDInputRELPServerRun \" target_port \"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-using_RELP
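For illustration, the placeholders above could be filled in as follows. Port 2514, the server address 192.0.2.1, and the log path are example values only, and this trimmed-down sketch omits directives not needed for RELP forwarding.

# Client /etc/rsyslog.conf - forward everything over RELP to 192.0.2.1:2514
$ModLoad imuxsock
$ModLoad omrelp
*.* :omrelp:192.0.2.1:2514

# Server /etc/rsyslog.conf - accept RELP on port 2514 and store the messages
$ModLoad imuxsock
$ModLoad imrelp
$RuleSet relp
*.* /var/log/remote-relp.log
$InputRELPServerBindRuleset relp
$InputRELPServerRun 2514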
Chapter 3. Deploying JBoss EAP 7 application on OpenShift using Helm chart
Chapter 3. Deploying JBoss EAP 7 application on OpenShift using Helm chart You can deploy and run your Jakarta EE application with JBoss EAP 7 on OpenShift using Helm charts. Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts. A Helm chart is a collection of files that describes the OpenShift Container Platform resources. The following procedures demonstrate how to deploy and run a Jakarta EE application using a Helm chart on the OpenShift Container Platform web console. Important This feature is provided as Technology Preview only. It is not supported for use in a production environment, and it might be subject to significant future changes. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. 3.1. Prerequisites You have an OpenShift instance installed and operational. For more information about installing and configuring your OpenShift instance, see OpenShift Container Platform Getting Started guide . You are logged in to the OpenShift Container Platform web console. For more information about using the OpenShift web console, see the OpenShift Container Platform Getting Started guide . Note You can also use the OpenShift Sandbox to deploy and run a JBoss EAP application on OpenShift Container Platform. This is a trial sandbox available for a limited period of time. This documentation uses a sample Jakarta EE application. You can use the same procedure to deploy your own Jakarta EE application. For more information, see https://github.com/jboss-eap-up-and-running/eap7-getting-started 3.2. Creating a JBoss EAP EAP 7 application with Helm You can create your JBoss EAP 7 application using Helm chart on the OpenShift web console. Procedure In the main navigation, click the drop-down menu and select Developer . In the navigation menu, click Add . The Add page opens. In the Add page, click Helm Chart . In the Helm Charts catalog, search for JBoss EAP 7.4 . Click the JBoss EAP 7.4 Helm chart tile. The side panel displays information about the JBoss EAP 7 Helm chart. Click Install Helm Chart . Some form sections are collapsed by default. Click > to expand and view its content. Note No updates are required to these sections to proceed. The details about the Jakarta EE application that you are building and deploying are specified in the build.uri field. Note If you are building a different application, you must change this uri field to point to the Git repository of that application. Click Install to create the JBoss EAP 7 application using the Helm chart. Verification The Helm release is represented by a dashed box that contains the JBoss EAP icon and eap74 text. This content is placed outside the dashed box. The deployment is indicated by a circle inside the dashed box with text D eap74 . Verify that you see an eap74 Helm Release. Verify that you see an eap74 deployment. 3.3. Viewing the Helm release After you have successfully created your JBoss EAP 7 application using Helm chart, you can view all the information related to the Helm release. Prerequisites You have created your JBoss EAP 7 application using Helm chart. See creating a JBoss EAP EAP 7 application with Helm . Procedure In the navigation menu, click Helm . Click eap74 Helm release. The Helm Release details page opens. It shows all the information related to the Helm release that you installed. 
Click the Resources tab. It lists all the resources created by this Helm release. Verification Verify that you see a Deployed label to the Helm release eap74 . 3.4. Viewing the associated code After you have successfully created your JBoss EAP 7 application using Helm chart, you can view the associated code. Prerequisites You have created your JBoss EAP 7 application using Helm chart. See creating a JBoss EAP EAP 7 application with Helm . Procedure In the navigation menu, and click Topology . In the Topology view the eap74 deployment displays a code icon in the bottom right-hand corner. This icon either represents the Git repository of the associated code, or if the appropriate operators are installed, it will bring up the associated code in your IDE. If the icon shown is CodeReady Workspaces or Eclipse Che, click it to bring up the associated code in your IDE. Otherwise, click it to navigate to the associated Git repository. Verification Verify that you can see the code associated with your application either in your Git repository or in your IDE. 3.5. Viewing the Build status After you have successfully created your JBoss EAP 7 application using Helm chart, you can view the build status. Prerequisites You have created your JBoss EAP 7 application using Helm chart. See creating a JBoss EAP EAP 7 application with Helm . Procedure In the navigation menu, click Topology . In the Topology view, click the D eap74 icon. A side panel opens with detailed information about the application. In the side panel, click the Resources tab. The Builds section shows all the details related to builds of the application. Note The JBoss EAP 7 application is built in two steps: The first build configuration eap74-build-artifacts compiles and packages the Jakarta EE application, and creates a JBoss EAP server. The application is run on this JBoss EAP server. The build may take a few minutes to complete. The build progresses through various states such as Pending , Running , and Complete . The build state is indicated by a relevant message. When the build is complete, a checkmark and the following message is displayed: Build #1 was complete The second build configuration eap74 puts the Jakarta EE deployment and the JBoss EAP server in a runtime image that contains only what is required to run the application. When the second build is complete, a checkmark and the following message are displayed: Build #2 was complete When the first build is complete, the second build starts. Verification Verify the two builds for eap74-build-artifacts and eap74 are complete: The message Build #1 was complete is displayed for the eap74-build-artifacts build configuration. The message Build #2 was complete is displayed for the eap74 build configuration. 3.6. Viewing the pod status After you have successfully created your JBoss EAP 7 application using Helm chart you can view the pod status. Prerequisites You have created your JBoss EAP 7 application using Helm chart. See creating a JBoss EAP EAP 7 application with Helm . Procedure In the navigation menu, and click Topology . In the Topology view, click D eap74 . A side panel opens with detailed information about the application. In the Details tab, hover over the pod to see the pod status in a tooltip. The number of pods is displayed inside the pod circle. The color of the pod circle indicates the pod status: Light blue = Pending , Blue = Not Ready , Dark blue = Running . Note In the Topology view, the dark outer circle of the D eap74 deployment icon also indicates the pod status. 
Verification Verify that the text inside the pod circle displays 1 pod . Verify that the Pod circle displays 1 Running when you hover over it. 3.7. Running the JBoss EAP 7 application After you have successfully created and built your JBoss EAP 7 application with Helm, you can access it. Prerequisites You have created your JBoss EAP 7 application. See creating a JBoss EAP EAP 7 application with Helm . Procedure In the Topology view , click the external link icon in the top right-hand corner to open the URL and run the application in a separate browser window. Note This action opens the URL on a web browser window. Verification Verify that the application JBoss EAP 7 on Red Hat OpenShift opens in a separate browser window.
[ "build: uri: https://github.com/jboss-eap-up-and-running/eap7-getting-started" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_online/assembly_deploying-jboss-eap-7-application-on-openshift-using-helm-chart_default
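For reference only: a comparable release could likely be created from the command line once a JBoss EAP Helm chart repository is configured for your cluster. The repository alias and chart name below are placeholders; the build.uri value matches the chart form shown above.

# <eap-repo>/<eap-chart> are placeholders for the configured JBoss EAP Helm repository and chart
helm install eap74 <eap-repo>/<eap-chart> \
  --set build.uri=https://github.com/jboss-eap-up-and-running/eap7-getting-started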
Chapter 22. Cron
Chapter 22. Cron Only consumer is supported The Cron component is a generic interface component that allows triggering events at specific time interval specified using the Unix cron syntax (e.g. 0/2 * * * * ? to trigger an event every two seconds). Being an interface component, the Cron component does not contain a default implementation, instead it requires that the users plug the implementation of their choice. The following standard Camel components support the Cron endpoints: Camel-quartz Camel-spring The Cron component is also supported in Camel K , which can use the Kubernetes scheduler to trigger the routes when required by the cron expression. Camel K does not require additional libraries to be plugged when using cron expressions compatible with Kubernetes cron syntax. 22.1. Dependencies When using cron with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency> Additional libraries may be needed in order to plug a specific implementation. 22.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 22.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 22.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 22.3. Component Options The Cron component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean cronService (advanced) The id of the CamelCronService to use when multiple implementations are provided. String 22.4. Endpoint Options The Cron endpoint is configured using URI syntax: with the following path and query parameters: 22.4.1. Path Parameters (1 parameters) Name Description Default Type name (consumer) Required The name of the cron trigger. String 22.4.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean schedule (consumer) Required A cron expression that will be used to generate events. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern 22.5. Usage The component can be used to trigger events at specified times, as in the following example: from("cron:tab?schedule=0/1+*+*+*+*+?") .setBody().constant("event") .log("USD{body}"); The schedule expression 0/3+10+ * +? can be also written as 0/3 10 * * * ? and triggers an event every three seconds only in the tenth minute of each hour. Parts in the schedule expression means (in order): Seconds (optional) Minutes Hours Day of month Month Day of week Year (optional) Schedule expressions can be made of 5 to 7 parts. When expressions are composed of 6 parts, the first items is the "seconds" part (and year is considered missing). Other valid examples of schedule expressions are: 0/2 * * * ? (5 parts, an event every two minutes) 0 0/2 * * * MON-FRI 2030 (7 parts, an event every two minutes only in year 2030) Routes can also be written using the XML DSL. <route> <from uri="cron:tab?schedule=0/1+*+*+*+*+?"/> <setBody> <constant>event</constant> </setBody> <to uri="log:info"/> </route> 22.6. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.cron.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cron.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.cron.cron-service The id of the CamelCronService to use when multiple implementations are provided. String camel.component.cron.enabled Whether to enable auto configuration of the cron component. This is enabled by default. Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency>", "cron:name", "from(\"cron:tab?schedule=0/1+*+*+*+*+?\") .setBody().constant(\"event\") .log(\"USD{body}\");", "<route> <from uri=\"cron:tab?schedule=0/1+*+*+*+*+?\"/> <setBody> <constant>event</constant> </setBody> <to uri=\"log:info\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cron-component-starter
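As an illustrative sketch, the seven-part expression mentioned above (0 0/2 * * * MON-FRI 2030) could be used from a RouteBuilder as follows; the endpoint name, route id, and log message are arbitrary.

import org.apache.camel.builder.RouteBuilder;

public class CronWeekdayRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Trigger every two minutes, Monday to Friday, only during 2030
        from("cron:weekday?schedule=0+0/2+*+*+*+MON-FRI+2030")
            .routeId("cron-weekday-2030")
            .setBody().constant("event")
            .log("${body}");
    }
}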
Chapter 15. Known Issues
Chapter 15. Known Issues The sssd-common package is no longer multilib Because of a change in packaging, the sssd-common package is no longer multilib. Consequently, parallel installation of SSSD packages other than sssd-client no longer works due to a dependency conflict. Note that this was never a supported scenario, but the change that might affect upgrades under certain circumstances. To work around this problem, prior to upgrading, uninstall any multilib SSSD packages except for sssd-client . User login override fails trusted adusers group membership resolution If a user login is overriden by using the --login command-line parameter, then the group membership for this user will be incorrect until the user's first login. Group resolution is inconsistent with group overrides If a group GID is overriden, running the id command reports an incorrect GID. To work around this problem, run the getent group command on the overriden group. Wake on WLAN not working with WOWLAN="magic-packet" in ifcfg files Due to a regression, a kernel configuration item was omitted and a sysfs link for wireless LAN devices was not being created. Consequently, initialization scripts were unable to identify wireless LAN devices separately from Ethernet devices. With this update, the configuration item has been restored to the kernel and the proper sysfs links are now created. However, a related error in the ifup-wireless script means that the following workaround is currently required: As the root user, open the /etc/sysconfig/network-scripts/ifup-wireless file and change this: to this: The change is the addition of backquotes around phy_wireless_device USDDEVICE . Save and close the file. abrt is missing a dependency The abrt package released with Red Hat Enterprise Linux 6.7 is missing a dependency on python-argparse. During normal installation, python-argparse is usually included as a dependency in other packages. However, if customers upgrade from an earlier version of Red Hat Enterprise Linux, python-argparse is not installed. When python-argparse is not present, customers see errors like ImportError: No module named argparse when attempting to use the abrt-action-notify and abrt-action-generate-machine-id commands. To work around this issue, install the python-argparse package: For further information, refer to the Solution article: https://access.redhat.com/solutions/1549053 The zipl boot loader requires target information in each section When calling the zipl tool manually from a command line using a section name as a parameter, the tool was previously using the target defined in the default section of the /etc/zipl.conf file. In the current version of zipl the default sections' target is not being used automatically, resulting in an error. To work around the issue, manually edit the /etc/zipl.conf configuration file and copy the line starting with target= from the default section to every section.
[ "if [ -n \"USDWOWLAN\" ] ; then PHYDEVICE=phy_wireless_device USDDEVICE iw phy USDPHYDEVICE wowlan enable USD{WOWLAN} fi", "if [ -n \"USDWOWLAN\" ] ; then PHYDEVICE=`phy_wireless_device USDDEVICE` iw phy USDPHYDEVICE wowlan enable USD{WOWLAN} fi", "install python-argparse" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/known_issues
Chapter 2. FIPS support
Chapter 2. FIPS support Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. To use FIPS with Streams for Apache Kafka, you must have a FIPS-compliant OpenJDK (Open Java Development Kit) installed on your system. If your RHEL system is FIPS-enabled, OpenJDK automatically switches to FIPS mode when running Streams for Apache Kafka. This ensures that Streams for Apache Kafka uses the FIPS-compliant security libraries provided by OpenJDK. Minimum password length When running in FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. Additional resources What are Federal Information Processing Standards (FIPS) 2.1. Installing Streams for Apache Kafka with FIPS mode enabled Enable FIPS mode before you install Streams for Apache Kafka on RHEL. Red Hat recommends installing RHEL with FIPS mode enabled, as opposed to enabling FIPS mode later. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and that continuous monitoring tests are in place. With RHEL running in FIPS mode, you must ensure that the Streams for Apache Kafka configuration is FIPS-compliant. Additionally, your Java implementation must also be FIPS-compliant. Note Running Streams for Apache Kafka on RHEL in FIPS mode requires a FIPS-compliant JDK. Procedure Install RHEL in FIPS mode. For further information, see the information on security hardening in the RHEL documentation . Proceed with the installation of Streams for Apache Kafka. Configure Streams for Apache Kafka to use FIPS-compliant algorithms and protocols. If used, ensure that the following configuration is compliant: SSL cipher suites and TLS versions must be supported by the JDK framework. SCRAM-SHA-512 passwords must be at least 32 characters long. Important Make sure that your installation environment and Streams for Apache Kafka configuration remain compliant as FIPS requirements change.
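Before proceeding with the Streams for Apache Kafka installation, it can be useful to confirm that the host is actually running in FIPS mode. A minimal check, assuming RHEL 8 or later with the standard crypto-policies tooling installed:
sudo fips-mode-setup --check
sysctl crypto.fips_enabled
Both commands should report that FIPS mode is enabled (the sysctl value is 1) before you rely on OpenJDK switching itself into FIPS mode.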
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-fips-support-str
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation or if you find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_ansible_plug-ins_for_red_hat_developer_hub/providing-feedback
Chapter 4. Viewing change events
Chapter 4. Viewing change events After deploying the Debezium MySQL connector, it starts capturing changes to the inventory database. When the connector starts, it writes events to a set of Apache Kafka topics, each of which represents one of the tables in the MySQL database. The name of each topic begins with the name of the database server, dbserver1 . The connector writes to the following Kafka topics: dbserver1 The schema change topic to which DDL statements that apply to the tables for which changes are being captured are written. dbserver1.inventory.products Receives change event records for the products table in the inventory database. dbserver1.inventory.products_on_hand Receives change event records for the products_on_hand table in the inventory database. dbserver1.inventory.customers Receives change event records for the customers table in the inventory database. dbserver1.inventory.orders Receives change event records for the orders table in the inventory database. The remainder of this tutorial examines the dbserver1.inventory.customers Kafka topic. As you look more closely at the topic, you'll see how it represents different types of change events, and find information about how the connector captured each event. The tutorial contains the following sections: Viewing a create event Updating the database and viewing the update event Deleting a record in the database and viewing the delete event Restarting Kafka Connect and changing the database 4.1. Viewing a create event By viewing the dbserver1.inventory.customers topic, you can see how the MySQL connector captured create events in the inventory database. In this case, the create events capture new customers being added to the database. Procedure Open a new terminal and use kafka-console-consumer to consume the dbserver1.inventory.customers topic from the beginning of the topic. This command runs a simple consumer ( kafka-console-consumer.sh ) in the Pod that is running Kafka ( my-cluster-kafka-0 ): USD oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ --bootstrap-server localhost:9092 \ --from-beginning \ --property print.key=true \ --topic dbserver1.inventory.customers The consumer returns four messages (in JSON format), one for each row in the customers table. Each message contains the event records for the corresponding table row. There are two JSON documents for each event: a key and a value . The key corresponds to the row's primary key, and the value shows the details of the row (the fields that the row contains, the value of each field, and the type of operation that was performed on the row). For the last event, review the details of the key . Here are the details of the key of the last event (formatted for readability): { "schema":{ "type":"struct", "fields":[ { "type":"int32", "optional":false, "field":"id" } ], "optional":false, "name":"dbserver1.inventory.customers.Key" }, "payload":{ "id":1004 } } The event has two parts: a schema and a payload . The schema contains a Kafka Connect schema describing what is in the payload. In this case, the payload is a struct named dbserver1.inventory.customers.Key that is not optional and has one required field ( id of type int32 ). The payload has a single id field, with a value of 1004 . By reviewing the key of the event, you can see that this event applies to the row in the inventory.customers table whose id primary key column had a value of 1004 . Review the details of the same event's value . 
The event's value shows that the row was created, and describes what it contains (in this case, the id , first_name , last_name , and email of the inserted row). Here are the details of the value of the last event (formatted for readability): { "schema": { "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "dbserver1.inventory.customers.Value", "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "dbserver1.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": true, "field": "version" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "server_id" }, { "type": "int64", "optional": false, "field": "ts_sec" }, { "type": "string", "optional": true, "field": "gtid" }, { "type": "string", "optional": false, "field": "file" }, { "type": "int64", "optional": false, "field": "pos" }, { "type": "int32", "optional": false, "field": "row" }, { "type": "boolean", "optional": true, "field": "snapshot" }, { "type": "int64", "optional": true, "field": "thread" }, { "type": "string", "optional": true, "field": "db" }, { "type": "string", "optional": true, "field": "table" } ], "optional": false, "name": "io.debezium.connector.mysql.Source", "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "dbserver1.inventory.customers.Envelope", "version": 1 }, "payload": { "before": null, "after": { "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "[email protected]" }, "source": { "version": "2.3.4.Final", "name": "dbserver1", "server_id": 0, "ts_sec": 0, "gtid": null, "file": "mysql-bin.000003", "pos": 154, "row": 0, "snapshot": true, "thread": null, "db": "inventory", "table": "customers" }, "op": "r", "ts_ms": 1486500577691 } } This portion of the event is much longer, but like the event's key , it also has a schema and a payload . The schema contains a Kafka Connect schema named dbserver1.inventory.customers.Envelope (version 1) that can contain five fields: op A required field that contains a string value describing the type of operation. Values for the MySQL connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot). before An optional field that, if present, contains the state of the row before the event occurred. The structure will be described by the dbserver1.inventory.customers.Value Kafka Connect schema, which the dbserver1 connector uses for all rows in the inventory.customers table. after An optional field that, if present, contains the state of the row after the event occurred. The structure is described by the same dbserver1.inventory.customers.Value Kafka Connect schema used in before . 
source A required field that contains a structure describing the source metadata for the event, which in the case of MySQL, contains several fields: the connector name, the name of the binlog file where the event was recorded, the position in that binlog file where the event appeared, the row within the event (if there is more than one), the names of the affected database and table, the MySQL thread ID that made the change, whether this event was part of a snapshot, and, if available, the MySQL server ID, and the timestamp in seconds. ts_ms An optional field that, if present, contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event. Note The JSON representations of the events are much longer than the rows they describe. This is because, with every event key and value, Kafka Connect ships the schema that describes the payload . Over time, this structure may change. However, having the schemas for the key and the value in the event itself makes it much easier for consuming applications to understand the messages, especially as they evolve over time. The Debezium MySQL connector constructs these schemas based upon the structure of the database tables. If you use DDL statements to alter the table definitions in the MySQL databases, the connector reads these DDL statements and updates its Kafka Connect schemas. This ensures that each event is structured exactly like the table from which it originated at the time the event occurred. However, the Kafka topic containing all of the events for a single table might have events that correspond to each state of the table definition. The JSON converter includes the key and value schemas in every message, so it does produce very verbose events. Compare the event's key and value schemas to the state of the inventory database. In the terminal that is running the MySQL command line client, run the following statement: mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec) This shows that the event records you reviewed match the records in the database. 4.2. Updating the database and viewing the update event Now that you have seen how the Debezium MySQL connector captured the create events in the inventory database, you will change one of the records and see how the connector captures it. By completing this procedure, you will learn how to find details about what changed in a database commit, and how you can compare change events to determine when the change occurred in relation to other changes. 
Procedure In the terminal that is running the MySQL command line client, run the following statement: mysql> UPDATE customers SET first_name='Anne Marie' WHERE id=1004; Query OK, 1 row affected (0.05 sec) Rows matched: 1 Changed: 1 Warnings: 0 View the updated customers table: mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne Marie | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec) Switch to the terminal running kafka-console-consumer to see a new fifth event. By changing a record in the customers table, the Debezium MySQL connector generated a new event. You should see two new JSON documents: one for the event's key , and one for the new event's value . Here are the details of the key for the update event (formatted for readability): { "schema": { "type": "struct", "name": "dbserver1.inventory.customers.Key", "optional": false, "fields": [ { "field": "id", "type": "int32", "optional": false } ] }, "payload": { "id": 1004 } } This key is the same as the key for the previous events. Here is that new event's value . There are no changes in the schema section, so only the payload section is shown (formatted for readability): { "schema": {...}, "payload": { "before": { 1 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "[email protected]" }, "after": { 2 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "[email protected]" }, "source": { 3 "version": "2.3.4.Final", "name": "dbserver1", "server_id": 223344, "ts_sec": 1486501486, "gtid": null, "file": "mysql-bin.000003", "pos": 364, "row": 0, "snapshot": null, "thread": 3, "db": "inventory", "table": "customers" }, "op": "u", 4 "ts_ms": 1486501486308 5 } } 1 The before field now has the state of the row with the values before the database commit. 2 The after field now has the updated state of the row, and the first_name value is now Anne Marie . 3 The source field structure has many of the same values as before, except that the ts_sec and pos fields have changed (the file might have changed in other circumstances). 4 The op field value is now u , signifying that this row changed because of an update. 5 The ts_ms field shows the time stamp for when Debezium processed this event. By viewing the payload section, you can learn several important things about the update event: By comparing the before and after structures, you can determine what actually changed in the affected row because of the commit. By reviewing the source structure, you can find information about MySQL's record of the change (providing traceability). By comparing the payload section of an event to other events in the same topic (or a different topic), you can determine whether the event occurred before, after, or as part of the same MySQL commit as another event. 4.3. Deleting a record in the database and viewing the delete event Now that you have seen how the Debezium MySQL connector captured the create and update events in the inventory database, you will delete one of the records and see how the connector captures it. 
By completing this procedure, you will learn how to find details about delete events, and how Kafka uses log compaction to reduce the number of delete events while still enabling consumers to get all of the events. Procedure In the terminal that is running the MySQL command line client, run the following statement: mysql> DELETE FROM customers WHERE id=1004; Query OK, 1 row affected (0.00 sec) Note If the above command fails with a foreign key constraint violation, then you must remove the reference of the customer address from the addresses table using the following statement: mysql> DELETE FROM addresses WHERE customer_id=1004; Switch to the terminal running kafka-console-consumer to see two new events. By deleting a row in the customers table, the Debezium MySQL connector generated two new events. Review the key and value for the first new event. Here are the details of the key for the first new event (formatted for readability): { "schema": { "type": "struct", "name": "dbserver1.inventory.customers.Key", "optional": false, "fields": [ { "field": "id", "type": "int32", "optional": false } ] }, "payload": { "id": 1004 } } This key is the same as the key in the previous two events you looked at. Here is the value of the first new event (formatted for readability): { "schema": {...}, "payload": { "before": { 1 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "[email protected]" }, "after": null, 2 "source": { 3 "version": "2.3.4.Final", "name": "dbserver1", "server_id": 223344, "ts_sec": 1486501558, "gtid": null, "file": "mysql-bin.000003", "pos": 725, "row": 0, "snapshot": null, "thread": 3, "db": "inventory", "table": "customers" }, "op": "d", 4 "ts_ms": 1486501558315 5 } } 1 The before field now has the state of the row that was deleted with the database commit. 2 The after field is null because the row no longer exists. 3 The source field structure has many of the same values as before, except the ts_sec and pos fields have changed (the file might have changed in other circumstances). 4 The op field value is now d , signifying that this row was deleted. 5 The ts_ms field shows the time stamp for when Debezium processed this event. Thus, this event provides a consumer with the information that it needs to process the removal of the row. The old values are also provided, because some consumers might require them to properly handle the removal. Review the key and value for the second new event. Here is the key for the second new event (formatted for readability): { "schema": { "type": "struct", "name": "dbserver1.inventory.customers.Key", "optional": false, "fields": [ { "field": "id", "type": "int32", "optional": false } ] }, "payload": { "id": 1004 } } Once again, this key is exactly the same key as in the previous three events you looked at. Here is the value of that same event (formatted for readability): { "schema": null, "payload": null } If Kafka is set up to be log compacted , it will remove older messages from the topic if there is at least one message later in the topic with the same key. This last event is called a tombstone event, because it has a key and an empty value. This means that Kafka will remove all prior messages with the same key. Even though the prior messages will be removed, the tombstone event means that consumers can still read the topic from the beginning and not miss any events. 4.4. 
Restarting the Kafka Connect service Now that you have seen how the Debezium MySQL connector captures create, update, and delete events, you will now see how it can capture change events even when it is not running. The Kafka Connect service automatically manages tasks for its registered connectors. Therefore, if it goes offline, when it restarts, it will start any non-running tasks. This means that even if Debezium is not running, it can still report changes in a database. In this procedure, you will stop Kafka Connect, change some data in the database, and then restart Kafka Connect to see the change events. Procedure Stop the Kafka Connect service. Open the configuration for the Kafka Connect deployment: USD oc edit deployment/my-connect-cluster-connect The deployment configuration opens: apiVersion: apps.openshift.io/v1 kind: Deployment metadata: ... spec: replicas: 1 ... Change the spec.replicas value to 0 . Save the configuration. Verify that the Kafka Connect service has stopped. This command shows that the Kafka Connect service is completed, and that no pods are running: USD oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-1-dxcs9 0/1 Completed 0 7h While the Kafka Connect service is down, switch to the terminal running the MySQL client, and add a new record to the database. mysql> INSERT INTO customers VALUES (default, "Sarah", "Thompson", "[email protected]"); Restart the Kafka Connect service. Open the deployment configuration for the Kafka Connect service. USD oc edit deployment/my-connect-cluster-connect The deployment configuration opens: apiVersion: apps.openshift.io/v1 kind: Deployment metadata: ... spec: replicas: 0 ... Change the spec.replicas value to 1 . Save the deployment configuration. Verify that the Kafka Connect service has restarted. This command shows that the Kafka Connect service is running, and that the pod is ready: USD oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-2-q9kkl 1/1 Running 0 74s Switch to the terminal that is running kafka-console-consumer.sh . New events pop up as they arrive. Examine the record that you created when Kafka Connect was offline (formatted for readability): { ... "payload":{ "id":1005 } } { ... "payload":{ "before":null, "after":{ "id":1005, "first_name":"Sarah", "last_name":"Thompson", "email":"[email protected]" }, "source":{ "version":"2.3.4.Final", "connector":"mysql", "name":"dbserver1", "ts_ms":1582581502000, "snapshot":"false", "db":"inventory", "table":"customers", "server_id":223344, "gtid":null, "file":"mysql-bin.000004", "pos":364, "row":0, "thread":5, "query":null }, "op":"c", "ts_ms":1582581502317 } }
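As a supplementary check that is not part of the original procedure, you can list the topics that the connector has created by using the Kafka tooling in the broker pod; the cluster and pod names below match the earlier examples:
oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --list | grep dbserver1
The output should include the dbserver1 schema change topic and the dbserver1.inventory.* table topics described at the start of this chapter.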
[ "oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --property print.key=true --topic dbserver1.inventory.customers", "{ \"schema\":{ \"type\":\"struct\", \"fields\":[ { \"type\":\"int32\", \"optional\":false, \"field\":\"id\" } ], \"optional\":false, \"name\":\"dbserver1.inventory.customers.Key\" }, \"payload\":{ \"id\":1004 } }", "{ \"schema\": { \"type\": \"struct\", \"fields\": [ { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"id\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"first_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"last_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"email\" } ], \"optional\": true, \"name\": \"dbserver1.inventory.customers.Value\", \"field\": \"before\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"id\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"first_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"last_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"email\" } ], \"optional\": true, \"name\": \"dbserver1.inventory.customers.Value\", \"field\": \"after\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"string\", \"optional\": true, \"field\": \"version\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"name\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"server_id\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"ts_sec\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"gtid\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"file\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"pos\" }, { \"type\": \"int32\", \"optional\": false, \"field\": \"row\" }, { \"type\": \"boolean\", \"optional\": true, \"field\": \"snapshot\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"thread\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"db\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"table\" } ], \"optional\": false, \"name\": \"io.debezium.connector.mysql.Source\", \"field\": \"source\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"op\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_ms\" } ], \"optional\": false, \"name\": \"dbserver1.inventory.customers.Envelope\", \"version\": 1 }, \"payload\": { \"before\": null, \"after\": { \"id\": 1004, \"first_name\": \"Anne\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"source\": { \"version\": \"2.3.4.Final\", \"name\": \"dbserver1\", \"server_id\": 0, \"ts_sec\": 0, \"gtid\": null, \"file\": \"mysql-bin.000003\", \"pos\": 154, \"row\": 0, \"snapshot\": true, \"thread\": null, \"db\": \"inventory\", \"table\": \"customers\" }, \"op\": \"r\", \"ts_ms\": 1486500577691 } }", "mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec)", "mysql> UPDATE customers SET first_name='Anne Marie' WHERE id=1004; Query OK, 1 row affected (0.05 sec) Rows matched: 1 Changed: 1 Warnings: 0", 
"mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne Marie | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec)", "{ \"schema\": { \"type\": \"struct\", \"name\": \"dbserver1.inventory.customers.Key\" \"optional\": false, \"fields\": [ { \"field\": \"id\", \"type\": \"int32\", \"optional\": false } ] }, \"payload\": { \"id\": 1004 } }", "{ \"schema\": {...}, \"payload\": { \"before\": { 1 \"id\": 1004, \"first_name\": \"Anne\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"after\": { 2 \"id\": 1004, \"first_name\": \"Anne Marie\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"source\": { 3 \"name\": \"2.3.4.Final\", \"name\": \"dbserver1\", \"server_id\": 223344, \"ts_sec\": 1486501486, \"gtid\": null, \"file\": \"mysql-bin.000003\", \"pos\": 364, \"row\": 0, \"snapshot\": null, \"thread\": 3, \"db\": \"inventory\", \"table\": \"customers\" }, \"op\": \"u\", 4 \"ts_ms\": 1486501486308 5 } }", "mysql> DELETE FROM customers WHERE id=1004; Query OK, 1 row affected (0.00 sec)", "mysql> DELETE FROM addresses WHERE customer_id=1004;", "{ \"schema\": { \"type\": \"struct\", \"name\": \"dbserver1.inventory.customers.Key\" \"optional\": false, \"fields\": [ { \"field\": \"id\", \"type\": \"int32\", \"optional\": false } ] }, \"payload\": { \"id\": 1004 } }", "{ \"schema\": {...}, \"payload\": { \"before\": { 1 \"id\": 1004, \"first_name\": \"Anne Marie\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"after\": null, 2 \"source\": { 3 \"name\": \"2.3.4.Final\", \"name\": \"dbserver1\", \"server_id\": 223344, \"ts_sec\": 1486501558, \"gtid\": null, \"file\": \"mysql-bin.000003\", \"pos\": 725, \"row\": 0, \"snapshot\": null, \"thread\": 3, \"db\": \"inventory\", \"table\": \"customers\" }, \"op\": \"d\", 4 \"ts_ms\": 1486501558315 5 } }", "{ \"schema\": { \"type\": \"struct\", \"name\": \"dbserver1.inventory.customers.Key\" \"optional\": false, \"fields\": [ { \"field\": \"id\", \"type\": \"int32\", \"optional\": false } ] }, \"payload\": { \"id\": 1004 } }", "{ \"schema\": null, \"payload\": null }", "oc edit deployment/my-connect-cluster-connect", "apiVersion: apps.openshift.io/v1 kind: Deployment metadata: spec: replicas: 1", "oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-1-dxcs9 0/1 Completed 0 7h", "mysql> INSERT INTO customers VALUES (default, \"Sarah\", \"Thompson\", \"[email protected]\");", "oc edit deployment/my-connect-cluster-connect", "apiVersion: apps.openshift.io/v1 kind: Deployment metadata: spec: replicas: 0", "oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-2-q9kkl 1/1 Running 0 74s", "{ \"payload\":{ \"id\":1005 } } { \"payload\":{ \"before\":null, \"after\":{ \"id\":1005, \"first_name\":\"Sarah\", \"last_name\":\"Thompson\", \"email\":\"[email protected]\" }, \"source\":{ \"version\":\"2.3.4.Final\", \"connector\":\"mysql\", \"name\":\"dbserver1\", \"ts_ms\":1582581502000, \"snapshot\":\"false\", \"db\":\"inventory\", \"table\":\"customers\", \"server_id\":223344, \"gtid\":null, \"file\":\"mysql-bin.000004\", \"pos\":364, \"row\":0, 
\"thread\":5, \"query\":null }, \"op\":\"c\", \"ts_ms\":1582581502317 } }" ]
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/getting_started_with_debezium/viewing-change-events
Chapter 7. Configuring audit logging policies
Chapter 7. Configuring audit logging policies You can control MicroShift audit log file rotation and retention by using configuration values. 7.1. About setting limits on audit log files Controlling the rotation and retention of the MicroShift audit log file by using configuration values helps keep the limited storage capacities of far-edge devices from being exceeded. On such devices, logging data accumulation can limit host system or cluster workloads, potentially causing the device to stop working. Setting audit log policies can help ensure that critical processing space is continually available. The values you set to limit MicroShift audit logs enable you to enforce the size, number, and age limits of audit log backups. Field values are processed independently of one another and without prioritization. You can set fields in combination to define a maximum storage limit for retained logs. For example: Set both maxFileSize and maxFiles to create a log storage upper limit. Set a maxFileAge value to automatically delete files older than the timestamp in the file name, regardless of the maxFiles value. 7.1.1. Default audit log values MicroShift includes the following default audit log rotation values: Table 7.1. MicroShift default audit log values Audit log parameter Default setting Definition maxFileAge : 0 How long log files are retained before automatic deletion. The default value means that a log file is never deleted based on age. This value can be configured. maxFiles : 10 The total number of log files retained. By default, MicroShift retains 10 log files. The oldest is deleted when an excess file is created. This value can be configured. maxFileSize : 200 By default, when the audit.log file reaches the maxFileSize limit, the audit.log file is rotated and MicroShift begins writing to a new audit.log file. This value is in megabytes and can be configured. profile : Default The Default profile setting only logs metadata for read and write requests; request bodies are not logged except for OAuth access token requests. If you do not specify this field, the Default profile is used. The maximum default storage usage for audit log retention is 2000 MB if there are 10 or fewer files. If you do not specify a value for a field, the default value is used. If you remove a previously set field value, the default value is restored after the MicroShift service restart. Important You must configure audit log retention and rotation in Red Hat Enterprise Linux (RHEL) for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the RHEL /var/log/audit/audit.log file to maintain MicroShift cluster health. Additional resources Configuring auditd for a secure environment Understanding Audit log files How to use logrotate utility to rotate log files (Solutions, dated 7 August 2024) 7.2. About audit log policy profiles Audit log profiles define how to log requests that come to the OpenShift API server and the Kubernetes API server. MicroShift supports the following predefined audit policy profiles: Profile Description Default Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. WriteRequestBodies In addition to logging metadata for all requests, logs request bodies for every write request to the API servers ( create , update , patch , delete , deletecollection ). This profile has more resource overhead than the Default profile. 
[1] AllRequestBodies In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers ( get , list , create , update , patch ). This profile has the most resource overhead. [1] None No requests are logged, including OAuth access token requests and OAuth authorize token requests. Warning Do not disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue to troubleshoot properly. Sensitive resources, such as Secret , Route , and OAuthClient objects, are only logged at the metadata level. By default, MicroShift uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage such as CPU, memory, and I/O. 7.3. Configuring audit log values You can configure audit log settings by using the MicroShift service configuration file. Procedure Make a copy of the provided config.yaml.default file in the /etc/microshift/ directory, renaming it config.yaml . Keep the new MicroShift config.yaml you create in the /etc/microshift/ directory. The new config.yaml is read whenever the MicroShift service starts. After you create it, the config.yaml file takes precedence over built-in settings. Replace the default values in the auditLog section of the YAML with your desired valid values. Example default auditLog configuration apiServer: # .... auditLog: maxFileAge: 7 1 maxFileSize: 200 2 maxFiles: 1 3 profile: Default 4 # .... 1 Specifies the maximum time in days that log files are kept. Files older than this limit are deleted. In this example, after a log file is more than 7 days old, it is deleted. The files are deleted regardless of whether or not the live log has reached the maximum file size specified in the maxFileSize field. File age is determined by the timestamp written in the name of the rotated log file, for example, audit-2024-05-16T17-03-59.994.log . When the value is 0 , the limit is disabled. 2 The maximum audit log file size in megabytes. In this example, the file is rotated as soon as the live log reaches the 200 MB limit. When the value is set to 0 , the limit is disabled. 3 The maximum number of rotated audit log files retained. After the limit is reached, the log files are deleted in order from oldest to newest. In this example, the value 1 results in only 1 file of size maxFileSize being retained in addition to the current active log. When the value is set to 0 , the limit is disabled. 4 Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. If you do not specify this field, the Default profile is used. Optional: To specify a new directory for logs, you can stop MicroShift, and then move the /var/log/kube-apiserver directory to your desired location: Stop MicroShift by running the following command: USD sudo systemctl stop microshift Move the /var/log/kube-apiserver directory to your desired location by running the following command: USD sudo mv /var/log/kube-apiserver <~/kube-apiserver> 1 1 Replace <~/kube-apiserver> with the path to the directory that you want to use. 
If you specified a new directory for logs, create a symlink to your custom directory at /var/log/kube-apiserver by running the following command: USD sudo ln -s <~/kube-apiserver> /var/log/kube-apiserver 1 1 Replace <~/kube-apiserver> with the path to the directory that you want to use. This enables the collection of logs in sos reports. If you are configuring audit log policies on a running instance, restart MicroShift by entering the following command: USD sudo systemctl restart microshift 7.4. Troubleshooting audit log configuration Use the following steps to troubleshoot custom audit log settings and file locations. Procedure Check the current values that are configured by running the following command: USD sudo microshift show-config --mode effective Example output auditLog: maxFileSize: 200 maxFiles: 1 maxFileAge: 7 profile: AllRequestBodies Check the audit.log file permissions by running the following command: USD sudo ls -ltrh /var/log/kube-apiserver/audit.log Example output -rw-------. 1 root root 46M Mar 12 09:52 /var/log/kube-apiserver/audit.log List the contents of the current log directory by running the following command: USD sudo ls -ltrh /var/log/kube-apiserver/ Example output total 6.0M -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-16.267.log -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-49.444.log -rw-------. 1 root root 962K Mar 12 10:57 audit.log
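As noted earlier in this chapter, rotation for the RHEL /var/log/audit/audit.log file is handled by auditd rather than by MicroShift. The following is an illustrative sketch only, with example values rather than recommendations; the relevant directives live in /etc/audit/auditd.conf:
# Rotate the audit log when it reaches 8 MB and keep 5 rotated files
max_log_file = 8
num_logs = 5
max_log_file_action = ROTATE
After editing the file, restart the audit daemon with service auditd restart so that the new limits take effect.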
[ "apiServer: . auditLog: maxFileAge: 7 1 maxFileSize: 200 2 maxFiles: 1 3 profile: Default 4 .", "sudo systemctl stop microshift", "sudo mv /var/log/kube-apiserver <~/kube-apiserver> 1", "sudo ln -s <~/kube-apiserver> /var/log/kube-apiserver 1", "sudo systemctl restart microshift", "sudo microshift show-config --mode effective", "auditLog: maxFileSize: 200 maxFiles: 1 maxFileAge: 7 profile: AllRequestBodies", "sudo ls -ltrh /var/log/kube-apiserver/audit.log", "-rw-------. 1 root root 46M Mar 12 09:52 /var/log/kube-apiserver/audit.log", "sudo ls -ltrh /var/log/kube-apiserver/", "total 6.0M -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-16.267.log -rw-------. 1 root root 2.0M Mar 12 10:56 audit-2024-03-12T14-56-49.444.log -rw-------. 1 root root 962K Mar 12 10:57 audit.log" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/configuring/microshift-audit-logs-config
function::cpu
function::cpu Name function::cpu - Returns the current cpu number Synopsis Arguments None Description This function returns the current cpu number.
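A minimal usage sketch, assuming SystemTap is installed and you have permission to run probes; the one-liner below prints the CPU on which the timer handler runs and then exits:
stap -e 'probe timer.ms(100) { printf("handler ran on cpu %d\n", cpu()); exit() }'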
[ "cpu:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cpu
Chapter 5. Enabling Windows container workloads
Chapter 5. Enabling Windows container workloads Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Note Dual NIC is not supported on WMCO-managed Windows instances. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed your cluster using installer-provisioned infrastructure, or using user-provisioned infrastructure with the platform: none field set in your install-config.yaml file. You have configured hybrid networking with OVN-Kubernetes for your cluster. For more information, see Configuring hybrid networking . You are running an OpenShift Container Platform cluster version 4.6.8 or later. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because WMCO installs and manages the runtime, it is recommended that you do not manually install containerd on nodes. Additional resources For the comprehensive prerequisites for the Windows Machine Config Operator, see Windows Machine Config Operator prerequisites . 5.1. Installing the Windows Machine Config Operator You can install the Windows Machine Config Operator using either the web console or OpenShift CLI ( oc ). Note Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. 5.1.1. Installing the Windows Machine Config Operator using the web console You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile. Review the information about the Operator and click Install . On the Install Operator page: Select the stable channel as the Update Channel . The stable channel enables the latest stable release of the WMCO to be installed. The Installation Mode is preconfigured because the WMCO must be available in a single namespace only. Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator . Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . The WMCO is now listed on the Installed Operators page. Note The WMCO is installed automatically into the namespace you defined, like openshift-windows-machine-config-operator . Verify that the Status shows Succeeded to confirm successful installation of the WMCO. 5.1.2. Installing the Windows Machine Config Operator using the CLI You can use the OpenShift CLI ( oc ) to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. 
Procedure Create a namespace for the WMCO. Create a Namespace object YAML file for the WMCO. For example, wmco-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: "true" 2 1 It is recommended to deploy the WMCO in the openshift-windows-machine-config-operator namespace. 2 This label is required for enabling cluster monitoring for the WMCO. Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-namespace.yaml Create the Operator group for the WMCO. Create an OperatorGroup object YAML file. For example, wmco-og.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator Create the Operator group: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-og.yaml Subscribe the namespace to the WMCO. Create a Subscription object YAML file. For example, wmco-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: "stable" 1 installPlanApproval: "Automatic" 2 name: "windows-machine-config-operator" source: "redhat-operators" 3 sourceNamespace: "openshift-marketplace" 4 1 Specify stable as the channel. 2 Set an approval strategy. You can set Automatic or Manual . 3 Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). 4 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-sub.yaml The WMCO is now installed to the openshift-windows-machine-config-operator namespace. Verify the WMCO installation: USD oc get csv -n openshift-windows-machine-config-operator Example output NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded 5.2. Configuring a secret for the Windows Machine Config Operator To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You created a PEM-encoded file containing an RSA key. Procedure Define the secret required to access the Windows VMs: USD oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> \ -n openshift-windows-machine-config-operator 1 1 You must create the private key in the WMCO namespace, such as openshift-windows-machine-config-operator . It is recommended to use a different private key than the one used when installing the cluster. 5.3. Using Windows containers in a proxy-enabled cluster The Windows Machine Config Operator (WMCO) can consume and use a cluster-wide egress proxy configuration when making external requests outside the cluster's internal network. 
This allows you to add Windows nodes and run workloads in a proxy-enabled cluster, allowing your Windows nodes to pull images from registries that are secured behind your proxy server or to make requests to off-cluster services and services that use a custom public key infrastructure. Note The cluster-wide proxy affects system components only, not user workloads. In proxy-enabled clusters, the WMCO is aware of the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY values that are set for the cluster. The WMCO periodically checks whether the proxy environment variables have changed. If there is a discrepancy, the WMCO reconciles and updates the proxy environment variables on the Windows instances. Windows workloads created on Windows nodes in proxy-enabled clusters do not inherit proxy settings from the node by default, the same as with Linux nodes. Also, by default, PowerShell sessions do not inherit proxy settings on Windows nodes in proxy-enabled clusters. Additional resources Configuring the cluster-wide proxy . 5.4. Using Windows containers with a mirror registry The Windows Machine Config Operator (WMCO) can pull images from a registry mirror rather than from a public registry by using an ImageDigestMirrorSet (IDMS) or ImageTagMirrorSet (ITMS) object to configure your cluster to pull images from the mirror registry. A mirror registry has the following benefits: Avoids public registry outages Speeds up node and pod creation Pulls images from behind your organization's firewall A mirror registry can also be used with an OpenShift Container Platform cluster in a disconnected, or air-gapped, network. A disconnected network is a restricted network without direct internet connectivity. Because the cluster does not have access to the internet, external container images cannot be referenced. Using a mirror registry requires the following general steps: Create the mirror registry, using a tool such as Red Hat Quay. Create a container image registry credentials file. Copy the images from your online image repository to your mirror registry. For information about these steps, see "About disconnected installation mirroring." After creating the mirror registry and mirroring the images, you can use an ImageDigestMirrorSet (IDMS) or ImageTagMirrorSet (ITMS) object to configure your cluster to pull images from the mirror registry without needing to update each of your pod specs. The IDMS and ITMS objects redirect requests to pull images from a repository on a source image registry and have them resolved by the mirror repository instead. If changes are made to the IDMS or ITMS object, the WMCO automatically updates the appropriate hosts.toml file on your Windows nodes with the new information. Note that the WMCO sequentially updates each Windows node when mirror settings are changed. As such, the time required for these updates increases with the number of Windows nodes in the cluster. Also, because Windows nodes configured by the WMCO rely on the containerd container runtime, the WMCO ensures that the containerd config files are up-to-date with the registry settings. For new nodes, these files are copied to the instances upon creation. For existing nodes, after activating the mirror registry, the registry controller uses SSH to access each node and copy the generated config files, replacing any existing files. You can use a mirror registry with machine set or Bring-Your-Own-Host (BYOH) Windows nodes. Additional resources About disconnected installation mirroring 5.4.1. 
Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the appropriate hosts.toml containerd configuration file(s) on every Windows node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a data center that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. Each of these custom resource objects identify the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. The Windows Machine Config Operator (WMCO) watches for changes to the IDMS and ITMS resources and generates a set of hosts.toml containerd configuration files, one file for each source registry, with those changes. The WMCO then updates any existing Windows nodes to use the new registry configuration. Note The IDMS and ITMS objects must be created before you can add Windows nodes using a mirrored registry. 5.4.2. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Important Windows images mirrored through ImageDigestMirrorSet and ImageTagMirrorSet objects have specific naming requirements. 
The final portion of the namespace and the image name of the mirror image must match the image being mirrored. For example, when mirroring the mcr.microsoft.com/oss/kubernetes/pause:3.9 image, the mirror image must have the <mirror_registry>/<optional_namespaces>/oss/kubernetes/pause:3.9 format. The optional_namespaces can be any number of leading repository namespaces. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy --all \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Important You must mirror the mcr.microsoft.com/oss/kubernetes/pause:3.9 image. For example, you could use the following skopeo command to mirror the image: USD skopeo copy \ docker://mcr.microsoft.com/oss/kubernetes/pause:3.9\ docker://example.io/oss/kubernetes/pause:3.9 Log in to your OpenShift Container Platform cluster. Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example2/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com source: registry.redhat.io mirrorSourcePolicy: NeverContactSource - mirrors: - docker.io source: docker-mirror.internal mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use another mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in image pull specifications. 7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 
Create the new object: USD oc create -f registryrepomirror.yaml To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check that the WMCO generated a hosts.toml file for each registry on each Windows instance. For the example IDMS object, there should be three files in the following file structure: USD tree USDconfig_path Example output C:/k/containerd/registries/ |── registry.access.redhat.com | └── hosts.toml |── mirror.example.com | └── hosts.toml └── docker.io └── hosts.toml: The following output represents a hosts.toml containerd configuration file where the example IDMS object was applied. Example host.toml files USD cat "USDconfig_path"/registry.access.redhat.com/host.toml server = "https://registry.access.redhat.com" # default fallback server since "AllowContactingSource" mirrorSourcePolicy is set [host."https://example.io/example/ubi-minimal"] capabilities = ["pull"] [host."https://example.com/example2/ubi-minimal"] # secondary mirror capabilities = ["pull"] USD cat "USDconfig_path"/registry.redhat.io/host.toml # "server" omitted since "NeverContactSource" mirrorSourcePolicy is set [host."https://mirror.example.com"] capabilities = ["pull"] USD cat "USDconfig_path"/docker.io/host.toml server = "https://docker.io" [host."https://docker-mirror.internal"] capabilities = ["pull", "resolve"] # resolve tags Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. 5.5. Rebooting a node gracefully The Windows Machine Config Operator (WMCO) minimizes node reboots whenever possible. However, certain operations and updates require a reboot to ensure that changes are applied correctly and securely. To safely reboot your Windows nodes, use the graceful reboot process. For information on gracefully rebooting a standard OpenShift Container Platform node, see "Rebooting a node gracefully" in the Nodes documentation. Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. 
You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction SSH into the Windows node and enter PowerShell by running the following command: C:\> powershell Restart the node by running the following command: C:\> Restart-Computer -Force Windows nodes on Amazon Web Services (AWS) do not return to READY state after a graceful reboot due to an inconsistency with the EC2 instance metadata routes and the Host Network Service (HNS) networks. After the reboot, SSH into any Windows node on AWS and add the route by running the following command in a shell prompt: C:\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip> where: 169.254.169.254 Specifies the address of the EC2 instance metadata endpoint. 255.255.255.0 Specifies the network mask of the EC2 instance metadata endpoint. <gateway_ip> Specifies the corresponding IP address of the gateway in the Windows instance, which you can find by running the following command: C:\> ipconfig | findstr /C:"Default Gateway" After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional resources Rebooting an OpenShift Container Platform node gracefully Backing up etcd data 5.6. Additional resources Generating a key pair for cluster node SSH access Adding Operators to a cluster
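If you script the AWS route workaround described above, you can combine the gateway lookup and the route addition in PowerShell. This is a convenience sketch only, not part of the documented procedure, and it assumes that the Get-NetRoute cmdlet from the in-box NetTCPIP module is available on the Windows node:
C:\> powershell
PS C:\> # Read the next hop of the default route, then add the metadata endpoint route
PS C:\> $gateway = (Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Select-Object -First 1).NextHop
PS C:\> route add 169.254.169.254 mask 255.255.255.0 $gateway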
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f wmco-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator", "oc create -f <file-name>.yaml", "oc create -f wmco-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4", "oc create -f <file-name>.yaml", "oc create -f wmco-sub.yaml", "oc get csv -n openshift-windows-machine-config-operator", "NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded", "oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1", "skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal", "skopeo copy docker://mcr.microsoft.com/oss/kubernetes/pause:3.9 docker://example.io/oss/kubernetes/pause:3.9", "apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example2/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com source: registry.redhat.io mirrorSourcePolicy: NeverContactSource - mirrors: - docker.io source: docker-mirror.internal mirrorSourcePolicy: AllowContactingSource", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "tree USDconfig_path", "C:/k/containerd/registries/ |── registry.access.redhat.com | └── hosts.toml |── mirror.example.com | └── hosts.toml └── docker.io └── hosts.toml:", "cat \"USDconfig_path\"/registry.access.redhat.com/host.toml server = \"https://registry.access.redhat.com\" # default fallback server since \"AllowContactingSource\" mirrorSourcePolicy is set [host.\"https://example.io/example/ubi-minimal\"] capabilities = [\"pull\"] secondary mirror capabilities = [\"pull\"] cat \"USDconfig_path\"/registry.redhat.io/host.toml \"server\" omitted since \"NeverContactSource\" mirrorSourcePolicy is set [host.\"https://mirror.example.com\"] capabilities = [\"pull\"] cat \"USDconfig_path\"/docker.io/host.toml server = \"https://docker.io\" [host.\"https://docker-mirror.internal\"] capabilities = [\"pull\", \"resolve\"] # resolve tags", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf", "oc adm cordon <node1>", "oc adm drain <node1> 
--ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "C:\\> powershell", "C:\\> Restart-Computer -Force", "C:\\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip>", "C:\\> ipconfig | findstr /C:\"Default Gateway\"", "oc adm uncordon <node1>", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/enabling-windows-container-workloads
Chapter 17. Optimizing data plane performance with the Intel vRAN Dedicated Accelerator ACC100
Chapter 17. Optimizing data plane performance with the Intel vRAN Dedicated Accelerator ACC100 17.1. Understanding the vRAN Dedicated Accelerator ACC100 Hardware accelerator cards from Intel accelerate 4G/LTE and 5G Virtualized Radio Access Networks (vRAN) workloads. This in turn increases the overall compute capacity of a commercial, off-the-shelf platform. The vRAN Dedicated Accelerator ACC100, based on Intel eASIC technology, is designed to offload and accelerate the computing-intensive process of forward error correction (FEC) for 4G/LTE and 5G technology, freeing up processing power. Intel eASIC devices are structured ASICs, an intermediate technology between FPGAs and standard application-specific integrated circuits (ASICs). Intel vRAN Dedicated Accelerator ACC100 support on OpenShift Container Platform uses one Operator: OpenNESS Operator for Wireless FEC Accelerators 17.2. Installing the OpenNESS SR-IOV Operator for Wireless FEC Accelerators The role of the OpenNESS Operator for Intel Wireless forward error correction (FEC) Accelerator is to orchestrate and manage the devices exposed by a range of Intel vRAN FEC acceleration hardware within the OpenShift Container Platform cluster. One of the most compute-intensive 4G/LTE and 5G workloads is RAN layer 1 (L1) FEC. FEC resolves data transmission errors over unreliable or noisy communication channels. FEC technology detects and corrects a limited number of errors in 4G/LTE or 5G data without the need for retransmission. The FEC device provided by the Intel vRAN Dedicated Accelerator ACC100 supports the vRAN use case. The OpenNESS SR-IOV Operator for Wireless FEC Accelerators provides functionality to create virtual functions (VFs) for the FEC device, bind them to appropriate drivers, and configure the VF queues for functionality in a 4G/LTE or 5G deployment. As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the OpenShift Container Platform CLI or the web console. 17.2.1. Installing the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the CLI As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the CLI. Prerequisites A cluster installed on bare-metal hardware. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by completing the following actions: Define the vran-acceleration-operators namespace by creating a file named sriov-namespace.yaml as shown in the following example: apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators labels: openshift.io/cluster-monitoring: "true" Create the namespace by running the following command: USD oc create -f sriov-namespace.yaml Install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators in the namespace you created in the previous step by creating the following objects: Create the following OperatorGroup custom resource (CR) and save the YAML in the sriov-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators spec: targetNamespaces: - vran-acceleration-operators Create the OperatorGroup CR by running the following command: USD oc create -f sriov-operatorgroup.yaml Run the following command to get the channel value required for the next step. 
USD oc get packagemanifest sriov-fec -n openshift-marketplace -o jsonpath='{.status.defaultChannel}' Example output stable Create the following Subscription CR and save the YAML in the sriov-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators spec: channel: "<channel>" 1 name: sriov-fec source: certified-operators 2 sourceNamespace: openshift-marketplace 1 Specify the channel value obtained from the .status.defaultChannel parameter in the previous step. 2 You must specify the certified-operators value. Create the Subscription CR by running the following command: USD oc create -f sriov-sub.yaml Verification Verify that the Operator is installed: USD oc get csv -n vran-acceleration-operators -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase sriov-fec.v1.1.0 Succeeded 17.2.2. Installing the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the web console As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the web console. Note You must create the Namespace and OperatorGroup custom resources (CRs) as mentioned in the previous section. Procedure Install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose OpenNESS SR-IOV Operator for Wireless FEC Accelerators from the list of available Operators, and then click Install . On the Install Operator page, select All namespaces on the cluster . Then, click Install . Optional: Verify that the SRIOV-FEC Operator is installed successfully: Switch to the Operators Installed Operators page. Ensure that OpenNESS SR-IOV Operator for Wireless FEC Accelerators is listed in the vran-acceleration-operators project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the console does not indicate that the Operator is installed, perform the following troubleshooting steps: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the vran-acceleration-operators project. 17.2.3. Configuring the SR-IOV FEC Operator for the Intel(R) vRAN Dedicated Accelerator ACC100 Programming the Intel vRAN Dedicated Accelerator ACC100 exposes the Single Root I/O Virtualization (SRIOV) virtual function (VF) devices that are then used to accelerate the forward error correction (FEC) in the vRAN workload. The Intel vRAN Dedicated Accelerator ACC100 accelerates 4G and 5G Virtualized Radio Access Networks (vRAN) workloads. This in turn increases the overall compute capacity of a commercial, off-the-shelf platform. This device is also known as Mount Bryce. The SR-IOV-FEC Operator handles the management of the FEC devices that are used to accelerate the FEC process in vRAN L1 applications. 
Configuring the SR-IOV-FEC Operator involves: Creating the virtual functions (VFs) for the FEC device Binding the VFs to the appropriate drivers Configuring the VF queues for desired functionality in a 4G or 5G deployment The role of forward error correction (FEC) is to correct transmission errors, where certain bits in a message can be lost or garbled. Messages can be lost or garbled due to noise in the transmission media, interference, or low signal strength. Without FEC, a garbled message would have to be resent, adding to the network load and impacting throughput and latency. Prerequisites Intel FPGA ACC100 5G/4G card. Node or nodes installed with the OpenNESS Operator for Wireless FEC Accelerators. Enable global SR-IOV and VT-d settings in the BIOS for the node. RT kernel configured with Performance Addon Operator. Log in as a user with cluster-admin privileges. Procedure Change to the vran-acceleration-operators project: USD oc project vran-acceleration-operators Verify that the SR-IOV-FEC Operator is installed: USD oc get csv -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase sriov-fec.v1.1.0 Succeeded Verify that the sriov-fec pods are running: USD oc get pods Example output NAME READY STATUS RESTARTS AGE sriov-device-plugin-j5jlv 1/1 Running 1 15d sriov-fec-controller-manager-85b6b8f4d4-gd2qg 1/1 Running 1 15d sriov-fec-daemonset-kqqs6 1/1 Running 1 15d sriov-device-plugin expose the FEC virtual functions as resources under the node sriov-fec-controller-manager applies CR to the node and maintains the operands containers sriov-fec-daemonset is responsible for: Discovering the SRIOV NICs on each node. Syncing the status of the custom resource (CR) defined in step 6. Taking the spec of the CR as input and configuring the discovered NICs. Retrieve all the nodes containing one of the supported vRAN FEC accelerator devices: USD oc get sriovfecnodeconfig Example output NAME CONFIGURED node1 Succeeded Find the physical function (PF) of the SR-IOV FEC accelerator device to configure: USD oc get sriovfecnodeconfig node1 -o yaml Example output status: conditions: - lastTransitionTime: "2021-03-19T17:19:37Z" message: Configured successfully observedGeneration: 1 reason: ConfigurationSucceeded status: "True" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: "" maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 1 vendorID: "8086" virtualFunctions: [] 2 1 This field indicates the PCI address of the card. 2 This field shows that the virtual functions are empty. Configure the number of virtual functions and queue groups on the FEC device: Create the following custom resource (CR) and save the YAML in the sriovfec_acc100cr.yaml file: Note This example configures the ACC100 8/8 queue groups for 5G, 4 queue groups for Uplink, and another 4 queue groups for Downlink. apiVersion: sriovfec.intel.com/v1 kind: SriovFecClusterConfig metadata: name: config 1 spec: nodes: - nodeName: node1 2 physicalFunctions: - pciAddress: 0000:af:00.0 3 pfDriver: "pci-pf-stub" vfDriver: "vfio-pci" vfAmount: 16 4 bbDevConfig: acc100: # Programming mode: 0 = VF Programming, 1 = PF Programming pfMode: false numVfBundles: 16 maxQueueSize: 1024 uplink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 downlink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 uplink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4 downlink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4 1 Specify a name for the CR object. 
The only name that can be specified is config . 2 Specify the node name. 3 Specify the PCI address of the card on which the SR-IOV-FEC Operator will be installed. 4 Specify the number of virtual functions to create. For the Intel vRAN Dedicated Accelerator ACC100, create all 16 VFs. Note The card is configured to provide up to 8 queue groups with up to 16 queues per group. The queue groups can be divided between groups allocated to 5G and 4G and Uplink and Downlink. The Intel vRAN Dedicated Accelerator ACC100 can be configured for: 4G or 5G only 4G and 5G at the same time Each configured VF has access to all the queues. Each of the queue groups have a distinct priority level. The request for a given queue group is made from the application level that is, the vRAN application leveraging the FEC device. Apply the CR: USD oc apply -f sriovfec_acc100cr.yaml After applying the CR, the SR-IOV FEC daemon starts configuring the FEC device. Verification Check the status: USD oc get sriovfecclusterconfig config -o yaml Example output status: conditions: - lastTransitionTime: "2021-03-19T11:46:22Z" message: Configured successfully observedGeneration: 1 reason: Succeeded status: "True" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: "8086" virtualFunctions: - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.0 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.1 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.2 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.3 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.4 Check the logs: Determine the pod name of the SR-IOV daemon: USD oc get po -o wide | grep sriov-fec-daemonset | grep node1 Example output sriov-fec-daemonset-kqqs6 1/1 Running 0 19h View the logs: USD oc logs sriov-fec-daemonset-kqqs6 Example output {"level":"Level(-2)","ts":1616794345.4786215,"logger":"daemon.drainhelper.cordonAndDrain()","msg":"node drained"} {"level":"Level(-4)","ts":1616794345.4786265,"logger":"daemon.drainhelper.Run()","msg":"worker function - start"} {"level":"Level(-4)","ts":1616794345.5762916,"logger":"daemon.NodeConfigurator.applyConfig","msg":"current node status","inventory":{"sriovAccelerat ors":[{"vendorID":"8086","deviceID":"0b32","pciAddress":"0000:20:00.0","driver":"","maxVirtualFunctions":1,"virtualFunctions":[]},{"vendorID":"8086" ,"deviceID":"0d5c","pciAddress":"0000:af:00.0","driver":"","maxVirtualFunctions":16,"virtualFunctions":[]}]}} {"level":"Level(-4)","ts":1616794345.5763638,"logger":"daemon.NodeConfigurator.applyConfig","msg":"configuring PF","requestedConfig":{"pciAddress":" 0000:af:00.0","pfDriver":"pci-pf-stub","vfDriver":"vfio-pci","vfAmount":2,"bbDevConfig":{"acc100":{"pfMode":false,"numVfBundles":16,"maxQueueSize":1 024,"uplink4G":{"numQueueGroups":4,"numAqsPerGroups":16,"aqDepthLog2":4},"downlink4G":{"numQueueGroups":4,"numAqsPerGroups":16,"aqDepthLog2":4},"uplink5G":{"numQueueGroups":0,"numAqsPerGroups":16,"aqDepthLog2":4},"downlink5G":{"numQueueGroups":0,"numAqsPerGroups":16,"aqDepthLog2":4}}}}} {"level":"Level(-4)","ts":1616794345.5774765,"logger":"daemon.NodeConfigurator.loadModule","msg":"executing command","cmd":"/usr/sbin/chroot /host/ modprobe pci-pf-stub"} {"level":"Level(-4)","ts":1616794345.5842702,"logger":"daemon.NodeConfigurator.loadModule","msg":"commands output","output":""} {"level":"Level(-4)","ts":1616794345.5843055,"logger":"daemon.NodeConfigurator.loadModule","msg":"executing 
command","cmd":"/usr/sbin/chroot /host/ modprobe vfio-pci"} {"level":"Level(-4)","ts":1616794345.6090655,"logger":"daemon.NodeConfigurator.loadModule","msg":"commands output","output":""} {"level":"Level(-2)","ts":1616794345.6091156,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:af:00.0/driver_override"} {"level":"Level(-2)","ts":1616794345.6091807,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/pci-pf-stub/bind"} {"level":"Level(-2)","ts":1616794345.7488534,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:b0:00.0/driver_override"} {"level":"Level(-2)","ts":1616794345.748938,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/vfio-pci/bind"} {"level":"Level(-2)","ts":1616794345.7492096,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:b0:00.1/driver_override"} {"level":"Level(-2)","ts":1616794345.7492566,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/vfio-pci/bind"} {"level":"Level(-4)","ts":1616794345.74968,"logger":"daemon.NodeConfigurator.applyConfig","msg":"executing command","cmd":"/sriov_workdir/pf_bb_config ACC100 -c /sriov_artifacts/0000:af:00.0.ini -p 0000:af:00.0"} {"level":"Level(-4)","ts":1616794346.5203931,"logger":"daemon.NodeConfigurator.applyConfig","msg":"commands output","output":"Queue Groups: 0 5GUL, 0 5GDL, 4 4GUL, 4 4GDL\nNumber of 5GUL engines 8\nConfiguration in VF mode\nPF ACC100 configuration complete\nACC100 PF [0000:af:00.0] configuration complete!\n\n"} {"level":"Level(-4)","ts":1616794346.520459,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"executing command","cmd":"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND"} {"level":"Level(-4)","ts":1616794346.5458736,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"commands output","output":"0000:af:00.0 @04 = 0142\n"} {"level":"Level(-4)","ts":1616794346.5459251,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"executing command","cmd":"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND=0146"} {"level":"Level(-4)","ts":1616794346.5795262,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"commands output","output":"0000:af:00.0 @04 0146\n"} {"level":"Level(-2)","ts":1616794346.5795407,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"MasterBus set","pci":"0000:af:00.0","output":"0000:af:00.0 @04 0146\n"} {"level":"Level(-4)","ts":1616794346.6867144,"logger":"daemon.drainhelper.Run()","msg":"worker function - end","performUncordon":true} {"level":"Level(-4)","ts":1616794346.6867719,"logger":"daemon.drainhelper.Run()","msg":"uncordoning node"} {"level":"Level(-4)","ts":1616794346.6896322,"logger":"daemon.drainhelper.uncordon()","msg":"starting uncordon attempts"} {"level":"Level(-2)","ts":1616794346.69735,"logger":"daemon.drainhelper.uncordon()","msg":"node uncordoned"} {"level":"Level(-4)","ts":1616794346.6973662,"logger":"daemon.drainhelper.Run()","msg":"cancelling the context to finish the leadership"} {"level":"Level(-4)","ts":1616794346.7029872,"logger":"daemon.drainhelper.Run()","msg":"stopped leading"} {"level":"Level(-4)","ts":1616794346.7030034,"logger":"daemon.drainhelper","msg":"releasing the lock (bug mitigation)"} {"level":"Level(-4)","ts":1616794346.8040674,"logger":"daemon.updateInventory","msg":"obtained 
inventory","inv":{"sriovAccelerators":[{"vendorID":"8086","deviceID":"0b32","pciAddress":"0000:20:00.0","driver":"","maxVirtualFunctions":1,"virtualFunctions":[]},{"vendorID":"8086","deviceID":"0d5c","pciAddress":"0000:af:00.0","driver":"pci-pf-stub","maxVirtualFunctions":16,"virtualFunctions":[{"pciAddress":"0000:b0:00.0","driver":"vfio-pci","deviceID":"0d5d"},{"pciAddress":"0000:b0:00.1","driver":"vfio-pci","deviceID":"0d5d"}]}]}} {"level":"Level(-4)","ts":1616794346.9058325,"logger":"daemon","msg":"Update ignored, generation unchanged"} {"level":"Level(-2)","ts":1616794346.9065044,"logger":"daemon.Reconcile","msg":"Reconciled","namespace":"vran-acceleration-operators","name":"pg-itengdvs02r.altera.com"} Check the FEC configuration of the card: USD oc get sriovfecnodeconfig node1 -o yaml Example output status: conditions: - lastTransitionTime: "2021-03-19T11:46:22Z" message: Configured successfully observedGeneration: 1 reason: Succeeded status: "True" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c 1 driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: "8086" virtualFunctions: - deviceID: 0d5d 2 driver: vfio-pci pciAddress: 0000:b0:00.0 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.1 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.2 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.3 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.4 1 The value 0d5c is the deviceID physical function of the FEC device. 2 The value 0d5d is the deviceID virtual function of the FEC device. 17.2.4. Verifying application pod access and ACC100 usage on OpenNESS OpenNESS is an edge computing software toolkit that you can use to onboard and manage applications and network functions on any type of network. To verify all OpenNESS features are working together, including SR-IOV binding, the device plugin, Wireless Base Band Device (bbdev) configuration, and SR-IOV (FEC) VF functionality inside a non-root pod, you can build an image and run a simple validation application for the device. For more information, go to openess.org . Prerequisites Node or nodes installed with the OpenNESS SR-IOV Operator for Wireless FEC Accelerators. Real-Time kernel and huge pages configured with the Performance Addon Operator. Procedure Create a namespace for the test by completing the following actions: Define the test-bbdev namespace by creating a file named test-bbdev-namespace.yaml file as shown in the following example: apiVersion: v1 kind: Namespace metadata: name: test-bbdev labels: openshift.io/run-level: "1" Create the namespace by running the following command: USD oc create -f test-bbdev-namespace.yaml Create the following Pod specification, and then save the YAML in the pod-test.yaml file: apiVersion: v1 kind: Pod metadata: name: pod-bbdev-sample-app namespace: test-bbdev 1 spec: containers: - securityContext: privileged: false capabilities: add: - IPC_LOCK - SYS_NICE name: bbdev-sample-app image: bbdev-sample-app:1.0 2 command: [ "sudo", "/bin/bash", "-c", "--" ] runAsUser: 0 3 resources: requests: hugepages-1Gi: 4Gi 4 memory: 1Gi cpu: "4" 5 intel.com/intel_fec_acc100: '1' 6 limits: memory: 4Gi cpu: "4" hugepages-1Gi: 4Gi intel.com/intel_fec_acc100: '1' 1 Specify the namespace you created in step 1. 2 This defines the test image containing the compiled DPDK. 3 Make the container execute internally as the root user. 4 Specify hugepage size hugepages-1Gi and the quantity of hugepages that will be allocated to the pod. 
Hugepages and isolated CPUs need to be configured using the Performance Addon Operator. 5 Specify the number of CPUs. 6 Testing of the ACC100 5G FEC configuration is supported by intel.com/intel_fec_acc100 . Create the pod: USD oc apply -f pod-test.yaml Check that the pod is created: USD oc get pods -n test-bbdev Example output NAME READY STATUS RESTARTS AGE pod-bbdev-sample-app 1/1 Running 0 80s Use a remote shell to log in to the pod-bbdev-sample-app : USD oc rsh pod-bbdev-sample-app Example output sh-4.4# Print the VF allocated to the pod: sh-4.4# printenv | grep INTEL_FEC Example output PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100=0.0.0.0:1d.00.0 1 1 This is the PCI address of the virtual function. Change to the test-bbdev directory. sh-4.4# cd test/test-bbdev/ Check the CPUs that are assigned to the pod: sh-4.4# export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) sh-4.4# echo USD{CPU} This prints out the CPUs that are assigned to the fec.pod . Example output 24,25,64,65 Run the test-bbdev application to test the device: sh-4.4# ./test-bbdev.py -e="-l USD{CPU} -a USD{PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}" -c validation \ -n 64 -b 32 -l 1 -v ./test_vectors/*" Example output Executing: ../../build/app/dpdk-test-bbdev -l 24-25,64-65 0000:1d.00.0 -- -n 64 -l 1 -c validation -v ./test_vectors/bbdev_null.data -b 32 EAL: Detected 80 lcore(s) EAL: Detected 2 NUMA nodes Option -w, --pci-whitelist is deprecated, use -a, --allow option instead EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Selected IOVA mode 'VA' EAL: Probing VFIO support... EAL: VFIO support initialized EAL: using IOMMU type 1 (Type 1) EAL: Probe PCI driver: intel_fpga_5ngr_fec_vf (8086:d90) device: 0000:1d.00.0 (socket 1) EAL: No legacy callbacks, legacy socket not created =========================================================== Starting Test Suite : BBdev Validation Tests Test vector file = ldpc_dec_v7813.data Device 0 queue 16 setup failed Allocated all queues (id=16) at prio0 on dev0 Device 0 queue 32 setup failed Allocated all queues (id=32) at prio1 on dev0 Device 0 queue 48 setup failed Allocated all queues (id=48) at prio2 on dev0 Device 0 queue 64 setup failed Allocated all queues (id=64) at prio3 on dev0 Device 0 queue 64 setup failed All queues on dev 0 allocated: 64 + ------------------------------------------------------- + == test: validation dev:0000:b0:00.0, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC Operation latency: avg: 23092 cycles, 10.0838 us min: 23092 cycles, 10.0838 us max: 23092 cycles, 10.0838 us TestCase [ 0] : validation_tc passed + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + Test Suite Summary : BBdev Validation Tests + Tests Total : 1 + Tests Skipped : 0 + Tests Passed : 1 1 + Tests Failed : 0 + Tests Lasted : 177.67 ms + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + 1 While some tests can be skipped, be sure that the vector tests pass. 17.3. Additional resources OpenNESS Operator for Wireless FEC Accelerators
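In addition to the verification steps above, you can confirm that the device plugin advertises the configured virtual functions as an allocatable node resource. This is a sketch; the node name node1 matches the earlier examples, and the reported count depends on the vfAmount value in your SriovFecClusterConfig:
# List the FEC resource advertised by the SR-IOV device plugin on the node
oc describe node node1 | grep intel.com/intel_fec_acc100
The intel.com/intel_fec_acc100 entries under Capacity and Allocatable should report the number of virtual functions that you configured, which is 16 in the example above.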
[ "apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f sriov-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators spec: targetNamespaces: - vran-acceleration-operators", "oc create -f sriov-operatorgroup.yaml", "oc get packagemanifest sriov-fec -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'", "stable", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators spec: channel: \"<channel>\" 1 name: sriov-fec source: certified-operators 2 sourceNamespace: openshift-marketplace", "oc create -f sriov-sub.yaml", "oc get csv -n vran-acceleration-operators -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-fec.v1.1.0 Succeeded", "oc project vran-acceleration-operators", "oc get csv -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-fec.v1.1.0 Succeeded", "oc get pods", "NAME READY STATUS RESTARTS AGE sriov-device-plugin-j5jlv 1/1 Running 1 15d sriov-fec-controller-manager-85b6b8f4d4-gd2qg 1/1 Running 1 15d sriov-fec-daemonset-kqqs6 1/1 Running 1 15d", "oc get sriovfecnodeconfig", "NAME CONFIGURED node1 Succeeded", "oc get sriovfecnodeconfig node1 -o yaml", "status: conditions: - lastTransitionTime: \"2021-03-19T17:19:37Z\" message: Configured successfully observedGeneration: 1 reason: ConfigurationSucceeded status: \"True\" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: \"\" maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 1 vendorID: \"8086\" virtualFunctions: [] 2", "apiVersion: sriovfec.intel.com/v1 kind: SriovFecClusterConfig metadata: name: config 1 spec: nodes: - nodeName: node1 2 physicalFunctions: - pciAddress: 0000:af:00.0 3 pfDriver: \"pci-pf-stub\" vfDriver: \"vfio-pci\" vfAmount: 16 4 bbDevConfig: acc100: # Programming mode: 0 = VF Programming, 1 = PF Programming pfMode: false numVfBundles: 16 maxQueueSize: 1024 uplink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 downlink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 uplink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4 downlink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4", "oc apply -f sriovfec_acc100cr.yaml", "oc get sriovfecclusterconfig config -o yaml", "status: conditions: - lastTransitionTime: \"2021-03-19T11:46:22Z\" message: Configured successfully observedGeneration: 1 reason: Succeeded status: \"True\" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: \"8086\" virtualFunctions: - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.0 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.1 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.2 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.3 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.4", "oc get po -o wide | grep sriov-fec-daemonset | grep node1", "sriov-fec-daemonset-kqqs6 1/1 Running 0 19h", "oc logs sriov-fec-daemonset-kqqs6", "{\"level\":\"Level(-2)\",\"ts\":1616794345.4786215,\"logger\":\"daemon.drainhelper.cordonAndDrain()\",\"msg\":\"node drained\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.4786265,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"worker function - start\"} 
{\"level\":\"Level(-4)\",\"ts\":1616794345.5762916,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"current node status\",\"inventory\":{\"sriovAccelerat ors\":[{\"vendorID\":\"8086\",\"deviceID\":\"0b32\",\"pciAddress\":\"0000:20:00.0\",\"driver\":\"\",\"maxVirtualFunctions\":1,\"virtualFunctions\":[]},{\"vendorID\":\"8086\" ,\"deviceID\":\"0d5c\",\"pciAddress\":\"0000:af:00.0\",\"driver\":\"\",\"maxVirtualFunctions\":16,\"virtualFunctions\":[]}]}} {\"level\":\"Level(-4)\",\"ts\":1616794345.5763638,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"configuring PF\",\"requestedConfig\":{\"pciAddress\":\" 0000:af:00.0\",\"pfDriver\":\"pci-pf-stub\",\"vfDriver\":\"vfio-pci\",\"vfAmount\":2,\"bbDevConfig\":{\"acc100\":{\"pfMode\":false,\"numVfBundles\":16,\"maxQueueSize\":1 024,\"uplink4G\":{\"numQueueGroups\":4,\"numAqsPerGroups\":16,\"aqDepthLog2\":4},\"downlink4G\":{\"numQueueGroups\":4,\"numAqsPerGroups\":16,\"aqDepthLog2\":4},\"uplink5G\":{\"numQueueGroups\":0,\"numAqsPerGroups\":16,\"aqDepthLog2\":4},\"downlink5G\":{\"numQueueGroups\":0,\"numAqsPerGroups\":16,\"aqDepthLog2\":4}}}}} {\"level\":\"Level(-4)\",\"ts\":1616794345.5774765,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ modprobe pci-pf-stub\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.5842702,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"commands output\",\"output\":\"\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.5843055,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ modprobe vfio-pci\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.6090655,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"commands output\",\"output\":\"\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.6091156,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"device's driver_override path\",\"path\":\"/sys/bus/pci/devices/0000:af:00.0/driver_override\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.6091807,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"driver bind path\",\"path\":\"/sys/bus/pci/drivers/pci-pf-stub/bind\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.7488534,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"device's driver_override path\",\"path\":\"/sys/bus/pci/devices/0000:b0:00.0/driver_override\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.748938,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"driver bind path\",\"path\":\"/sys/bus/pci/drivers/vfio-pci/bind\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.7492096,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"device's driver_override path\",\"path\":\"/sys/bus/pci/devices/0000:b0:00.1/driver_override\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.7492566,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"driver bind path\",\"path\":\"/sys/bus/pci/drivers/vfio-pci/bind\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.74968,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"executing command\",\"cmd\":\"/sriov_workdir/pf_bb_config ACC100 -c /sriov_artifacts/0000:af:00.0.ini -p 0000:af:00.0\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5203931,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"commands output\",\"output\":\"Queue Groups: 0 5GUL, 0 5GDL, 4 4GUL, 4 4GDL\\nNumber of 5GUL engines 8\\nConfiguration in VF mode\\nPF ACC100 configuration complete\\nACC100 PF [0000:af:00.0] configuration complete!\\n\\n\"} 
{\"level\":\"Level(-4)\",\"ts\":1616794346.520459,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5458736,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"commands output\",\"output\":\"0000:af:00.0 @04 = 0142\\n\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5459251,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND=0146\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5795262,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"commands output\",\"output\":\"0000:af:00.0 @04 0146\\n\"} {\"level\":\"Level(-2)\",\"ts\":1616794346.5795407,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"MasterBus set\",\"pci\":\"0000:af:00.0\",\"output\":\"0000:af:00.0 @04 0146\\n\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.6867144,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"worker function - end\",\"performUncordon\":true} {\"level\":\"Level(-4)\",\"ts\":1616794346.6867719,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"uncordoning node\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.6896322,\"logger\":\"daemon.drainhelper.uncordon()\",\"msg\":\"starting uncordon attempts\"} {\"level\":\"Level(-2)\",\"ts\":1616794346.69735,\"logger\":\"daemon.drainhelper.uncordon()\",\"msg\":\"node uncordoned\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.6973662,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"cancelling the context to finish the leadership\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.7029872,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"stopped leading\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.7030034,\"logger\":\"daemon.drainhelper\",\"msg\":\"releasing the lock (bug mitigation)\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.8040674,\"logger\":\"daemon.updateInventory\",\"msg\":\"obtained inventory\",\"inv\":{\"sriovAccelerators\":[{\"vendorID\":\"8086\",\"deviceID\":\"0b32\",\"pciAddress\":\"0000:20:00.0\",\"driver\":\"\",\"maxVirtualFunctions\":1,\"virtualFunctions\":[]},{\"vendorID\":\"8086\",\"deviceID\":\"0d5c\",\"pciAddress\":\"0000:af:00.0\",\"driver\":\"pci-pf-stub\",\"maxVirtualFunctions\":16,\"virtualFunctions\":[{\"pciAddress\":\"0000:b0:00.0\",\"driver\":\"vfio-pci\",\"deviceID\":\"0d5d\"},{\"pciAddress\":\"0000:b0:00.1\",\"driver\":\"vfio-pci\",\"deviceID\":\"0d5d\"}]}]}} {\"level\":\"Level(-4)\",\"ts\":1616794346.9058325,\"logger\":\"daemon\",\"msg\":\"Update ignored, generation unchanged\"} {\"level\":\"Level(-2)\",\"ts\":1616794346.9065044,\"logger\":\"daemon.Reconcile\",\"msg\":\"Reconciled\",\"namespace\":\"vran-acceleration-operators\",\"name\":\"pg-itengdvs02r.altera.com\"}", "oc get sriovfecnodeconfig node1 -o yaml", "status: conditions: - lastTransitionTime: \"2021-03-19T11:46:22Z\" message: Configured successfully observedGeneration: 1 reason: Succeeded status: \"True\" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c 1 driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: \"8086\" virtualFunctions: - deviceID: 0d5d 2 driver: vfio-pci pciAddress: 0000:b0:00.0 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.1 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.2 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.3 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.4", "apiVersion: v1 kind: Namespace metadata: 
name: test-bbdev labels: openshift.io/run-level: \"1\"", "oc create -f test-bbdev-namespace.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-bbdev-sample-app namespace: test-bbdev 1 spec: containers: - securityContext: privileged: false capabilities: add: - IPC_LOCK - SYS_NICE name: bbdev-sample-app image: bbdev-sample-app:1.0 2 command: [ \"sudo\", \"/bin/bash\", \"-c\", \"--\" ] runAsUser: 0 3 resources: requests: hugepages-1Gi: 4Gi 4 memory: 1Gi cpu: \"4\" 5 intel.com/intel_fec_acc100: '1' 6 limits: memory: 4Gi cpu: \"4\" hugepages-1Gi: 4Gi intel.com/intel_fec_acc100: '1'", "oc apply -f pod-test.yaml", "oc get pods -n test-bbdev", "NAME READY STATUS RESTARTS AGE pod-bbdev-sample-app 1/1 Running 0 80s", "oc rsh pod-bbdev-sample-app", "sh-4.4#", "sh-4.4# printenv | grep INTEL_FEC", "PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100=0.0.0.0:1d.00.0 1", "sh-4.4# cd test/test-bbdev/", "sh-4.4# export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) sh-4.4# echo USD{CPU}", "24,25,64,65", "sh-4.4# ./test-bbdev.py -e=\"-l USD{CPU} -a USD{PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}\" -c validation \\ -n 64 -b 32 -l 1 -v ./test_vectors/*\"", "Executing: ../../build/app/dpdk-test-bbdev -l 24-25,64-65 0000:1d.00.0 -- -n 64 -l 1 -c validation -v ./test_vectors/bbdev_null.data -b 32 EAL: Detected 80 lcore(s) EAL: Detected 2 NUMA nodes Option -w, --pci-whitelist is deprecated, use -a, --allow option instead EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Selected IOVA mode 'VA' EAL: Probing VFIO support EAL: VFIO support initialized EAL: using IOMMU type 1 (Type 1) EAL: Probe PCI driver: intel_fpga_5ngr_fec_vf (8086:d90) device: 0000:1d.00.0 (socket 1) EAL: No legacy callbacks, legacy socket not created =========================================================== Starting Test Suite : BBdev Validation Tests Test vector file = ldpc_dec_v7813.data Device 0 queue 16 setup failed Allocated all queues (id=16) at prio0 on dev0 Device 0 queue 32 setup failed Allocated all queues (id=32) at prio1 on dev0 Device 0 queue 48 setup failed Allocated all queues (id=48) at prio2 on dev0 Device 0 queue 64 setup failed Allocated all queues (id=64) at prio3 on dev0 Device 0 queue 64 setup failed All queues on dev 0 allocated: 64 + ------------------------------------------------------- + == test: validation dev:0000:b0:00.0, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC Operation latency: avg: 23092 cycles, 10.0838 us min: 23092 cycles, 10.0838 us max: 23092 cycles, 10.0838 us TestCase [ 0] : validation_tc passed + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + Test Suite Summary : BBdev Validation Tests + Tests Total : 1 + Tests Skipped : 0 + Tests Passed : 1 1 + Tests Failed : 0 + Tests Lasted : 177.67 ms + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/cnf-optimize-data-performance-with-acc100
Chapter 3. ClusterRole [authorization.openshift.io/v1]
Chapter 3. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required rules 3.1. Specification Property Type Description aggregationRule AggregationRule AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.2. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 3.2. 
API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/clusterroles GET : list objects of kind ClusterRole POST : create a ClusterRole /apis/authorization.openshift.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole 3.2.1. /apis/authorization.openshift.io/v1/clusterroles HTTP method GET Description list objects of kind ClusterRole Table 3.1. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ClusterRole schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/authorization.openshift.io/v1/clusterroles/{name} Table 3.5. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.7. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.8. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. Body parameters Parameter Type Description body ClusterRole schema Table 3.13. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty
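To make the schema and endpoint descriptions above concrete, the following is a minimal sketch of a ClusterRole object in this API group. The role name and the rule contents are illustrative and do not come from the reference tables above:
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  name: example-pod-reader # illustrative name
rules:
- apiGroups:
  - "" # the empty string selects the core Kubernetes API group
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
Creating an object like this with a POST to /apis/authorization.openshift.io/v1/clusterroles, for example with oc create -f <file>.yaml, returns one of the ClusterRole schema responses listed in the tables above.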
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/role_apis/clusterrole-authorization-openshift-io-v1
Appendix A. Managing Certificates
Appendix A. Managing Certificates Abstract TLS authentication uses X.509 certificates-a common, secure and reliable method of authenticating your application objects. You can create X.509 certificates that identify your Red Hat Fuse applications. A.1. What is an X.509 Certificate? Role of certificates An X.509 certificate binds a name to a public key value. The role of the certificate is to associate a public key with the identity contained in the X.509 certificate. Integrity of the public key Authentication of a secure application depends on the integrity of the public key value in the application's certificate. If an impostor replaces the public key with its own public key, it can impersonate the true application and gain access to secure data. To prevent this type of attack, all certificates must be signed by a certification authority (CA). A CA is a trusted node that confirms the integrity of the public key value in a certificate. Digital signatures A CA signs a certificate by adding its digital signature to the certificate. A digital signature is a message encoded with the CA's private key. The CA's public key is made available to applications by distributing a certificate for the CA. Applications verify that certificates are validly signed by decoding the CA's digital signature with the CA's public key. Warning The supplied demonstration certificates are self-signed certificates. These certificates are insecure because anyone can access their private key. To secure your system, you must create new certificates signed by a trusted CA. Contents of an X.509 certificate An X.509 certificate contains information about the certificate subject and the certificate issuer (the CA that issued the certificate). A certificate is encoded in Abstract Syntax Notation One (ASN.1), a standard syntax for describing messages that can be sent or received on a network. The role of a certificate is to associate an identity with a public key value. In more detail, a certificate includes: A subject distinguished name (DN) that identifies the certificate owner. The public key associated with the subject. X.509 version information. A serial number that uniquely identifies the certificate. An issuer DN that identifies the CA that issued the certificate. The digital signature of the issuer. Information about the algorithm used to sign the certificate. Some optional X.509 v.3 extensions; for example, an extension exists that distinguishes between CA certificates and end-entity certificates. Distinguished names A DN is a general purpose X.500 identifier that is often used in the context of security. See Appendix B, ASN.1 and Distinguished Names for more details about DNs. A.2. Certification Authorities A.2.1. Introduction to Certificate Authorities A CA consists of a set of tools for generating and managing certificates and a database that contains all of the generated certificates. When setting up a system, it is important to choose a suitable CA that is sufficiently secure for your requirements. There are two types of CA you can use: commercial CAs are companies that sign certificates for many systems. private CAs are trusted nodes that you set up and use to sign certificates for your system only. A.2.2. Commercial Certification Authorities Signing certificates There are several commercial CAs available. The mechanism for signing a certificate using a commercial CA depends on which CA you choose. 
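Whichever CA you choose, you can examine the fields of a signed certificate with the openssl command-line utility. The following is a minimal illustration, not part of the original procedures in this appendix, where cert.pem is a placeholder for any PEM-encoded certificate file:
openssl x509 -in cert.pem -noout -subject -issuer -dates -serial
This prints the subject DN, the issuer DN (the CA that signed the certificate), the validity period, and the serial number; adding the -text option also shows the public key, the signature algorithm, and any X.509 v3 extensions.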
Advantages of commercial CAs An advantage of commercial CAs is that they are often trusted by a large number of people. If your applications are designed to be available to systems external to your organization, use a commercial CA to sign your certificates. If your applications are for use within an internal network, a private CA might be appropriate. Criteria for choosing a CA Before choosing a commercial CA, consider the following criteria: What are the certificate-signing policies of the commercial CAs? Are your applications designed to be available on an internal network only? What are the potential costs of setting up a private CA compared to the costs of subscribing to a commercial CA? A.2.3. Private Certification Authorities Choosing a CA software package If you want to take responsibility for signing certificates for your system, set up a private CA. To set up a private CA, you require access to a software package that provides utilities for creating and signing certificates. Several packages of this type are available. OpenSSL software package One software package that allows you to set up a private CA is OpenSSL, http://www.openssl.org . The OpenSSL package includes basic command line utilities for generating and signing certificates. Complete documentation for the OpenSSL command line utilities is available at http://www.openssl.org/docs . Setting up a private CA using OpenSSL To set up a private CA, see the instructions in Section A.5, "Creating Your Own Certificates" . Choosing a host for a private certification authority Choosing a host is an important step in setting up a private CA. The level of security associated with the CA host determines the level of trust associated with certificates signed by the CA. If you are setting up a CA for use in the development and testing of Red Hat Fuse applications, use any host that the application developers can access. However, when you create the CA certificate and private key, do not make the CA private key available on any hosts where security-critical applications run. Security precautions If you are setting up a CA to sign certificates for applications that you are going to deploy, make the CA host as secure as possible. For example, take the following precautions to secure your CA: Do not connect the CA to a network. Restrict all access to the CA to a limited set of trusted users. Use an RF-shield to protect the CA from radio-frequency surveillance. A.3. Certificate Chaining Certificate chain A certificate chain is a sequence of certificates, where each certificate in the chain is signed by the subsequent certificate. Figure A.1, "A Certificate Chain of Depth 2" shows an example of a simple certificate chain. Figure A.1. A Certificate Chain of Depth 2 Self-signed certificate The last certificate in the chain is normally a self-signed certificate -a certificate that signs itself. Chain of trust The purpose of a certificate chain is to establish a chain of trust from a peer certificate to a trusted CA certificate. The CA vouches for the identity in the peer certificate by signing it. If the CA is one that you trust (indicated by the presence of a copy of the CA certificate in your root certificate directory), this implies you can trust the signed peer certificate as well. Certificates signed by multiple CAs A CA certificate can be signed by another CA. For example, an application certificate could be signed by the CA for the finance department of Progress Software, which in turn is signed by a self-signed commercial CA. 
Figure A.2, "A Certificate Chain of Depth 3" shows what this certificate chain looks like. Figure A.2. A Certificate Chain of Depth 3 Trusted CAs An application can accept a peer certificate, provided it trusts at least one of the CA certificates in the signing chain. A.4. Special Requirements on HTTPS Certificates Overview The HTTPS specification mandates that HTTPS clients must be capable of verifying the identity of the server. This can potentially affect how you generate your X.509 certificates. The mechanism for verifying the server identity depends on the type of client. Some clients might verify the server identity by accepting only those server certificates signed by a particular trusted CA. In addition, clients can inspect the contents of a server certificate and accept only the certificates that satisfy specific constraints. In the absence of an application-specific mechanism, the HTTPS specification defines a generic mechanism, known as the HTTPS URL integrity check , for verifying the server identity. This is the standard mechanism used by Web browsers. HTTPS URL integrity check The basic idea of the URL integrity check is that the server certificate's identity must match the server host name. This integrity check has an important impact on how you generate X.509 certificates for HTTPS: the certificate identity (usually the certificate subject DN's common name) must match the host name on which the HTTPS server is deployed . The URL integrity check is designed to prevent man-in-the-middle attacks. Reference The HTTPS URL integrity check is specified by RFC 2818, published by the Internet Engineering Task Force (IETF) at http://www.ietf.org/rfc/rfc2818.txt . How to specify the certificate identity The certificate identity used in the URL integrity check can be specified in one of the following ways: Using commonName Using subjectAltName Using commonName The usual way to specify the certificate identity (for the purpose of the URL integrity check) is through the Common Name (CN) in the subject DN of the certificate. For example, if a server supports secure TLS connections at the following URL: The corresponding server certificate would have the following subject DN: Where the CN has been set to the host name, www.redhat.com . For details of how to set the subject DN in a new certificate, see Section A.5, "Creating Your Own Certificates" . Using subjectAltName (multi-homed hosts) Using the subject DN's Common Name for the certificate identity has the disadvantage that only one host name can be specified at a time. If you deploy a certificate on a multi-homed host, however, you might find it practical to allow the certificate to be used with any of the multi-homed host names. In this case, it is necessary to define a certificate with multiple, alternative identities, and this is only possible using the subjectAltName certificate extension. For example, if you have a multi-homed host that supports connections to either of the following host names: Then you can define a subjectAltName that explicitly lists both of these DNS host names. If you generate your certificates using the openssl utility, edit the relevant line of your openssl.cnf configuration file to specify the value of the subjectAltName extension, as follows: Where the HTTPS protocol matches the server host name against either of the DNS host names listed in the subjectAltName (the subjectAltName takes precedence over the Common Name). The HTTPS protocol also supports the wildcard character, \* , in host names. 
For example, you can define the subjectAltName as follows: This certificate identity matches any three-component host name in the domain jboss.org . Warning You must never use the wildcard character in the domain name (and you must take care never to do this accidentally by forgetting to type the dot, . , delimiter in front of the domain name). For example, if you specified *jboss.org , your certificate could be used on *any* domain that ends in the letters jboss . A.5. Creating Your Own Certificates Abstract This chapter describes the techniques and procedures to set up your own private Certificate Authority (CA) and to use this CA to generate and sign your own certificates. Warning Creating and managing your own certificates requires an expert knowledge of security. While the procedures described in this chapter can be convenient for generating your own certificates for demonstration and testing environments, it is not recommended to use these certificates in a production environment. A.5.1. Install the OpenSSL Utilities Installing OpenSSL on RHEL and Fedora platforms On Red Hat Enterprise Linux (RHEL) 5 and 6 and Fedora platforms, the OpenSSL utilities are made available as an RPM package. To install OpenSSL, enter the following command (executed with administrator privileges): Source code distribution The source distribution of OpenSSL is available from http://www.openssl.org/docs . The OpenSSL project provides source code distributions only . You cannot download a binary install of the OpenSSL utilities from the OpenSSL Web site. A.5.2. Set Up a Private Certificate Authority Overview If you choose to use a private CA, you need to generate your own certificates for your applications to use. The OpenSSL project provides free command-line utilities for setting up a private CA, creating signed certificates, and adding the CA to your Java keystore. Warning Setting up a private CA for a production environment requires a high level of expertise and extra care must be taken to protect the certificate store from external threats. Steps to set up a private Certificate Authority To set up your own private Certificate Authority: Create the directory structure for the CA, as follows: Using a text editor, create the file, X509CA /openssl.cfg , and add the following contents to this file: Example A.1. OpenSSL Configuration Important The preceding openssl.cfg configuration file is provided as a demonstration only . In a production environment, this configuration file would need to be carefully elaborated by an engineer with a high level of security expertise, and actively maintained to protect against evolving security threats. Initialize the demoCA/serial file, which must have the initial contents 01 (zero one). Enter the following command: Initialize the demoCA/index.txt file, which must initially be completely empty. Enter the following command: Create a new self-signed CA certificate and private key with the command: You are prompted for a pass phrase for the CA private key and details of the CA distinguished name as shown in Example A.2, "Creating a CA Certificate" . Example A.2. Creating a CA Certificate Note The security of the CA depends on the security of the private key file and the private key pass phrase used in this step. You must ensure that the file names and location of the CA certificate and private key, cacert.pem and cakey.pem , are the same as the values specified in openssl.cfg . A.5.3. 
Create a CA Trust Store File Overview A trust store file is commonly required on the client side of an SSL/TLS connection, in order to verify a server's identity. A trust store file can also be used to check digital signatures (for example, to check that a signature was made using the private key corresponding to one of the trusted certificates in the trust store file). Steps to create a CA trust store To add one or more CA certificates to a trust store file: Assemble the collection of trusted CA certificates that you want to deploy. The trusted CA certificates can be obtained from public CAs or private CAs. The trusted CA certificates can be in any format that is compatible with the Java keystore utility; for example, PEM format. All you need are the certificates themselves; the private keys and passwords are not required. Add a CA certificate to the trust store using the keytool -import command. Enter the following command to add the CA certificate, cacert.pem , in PEM format, to a JKS trust store. Where truststore.ts is a keystore file containing CA certificates. If this file does not already exist, the keytool command creates it. The CAAlias is a convenient identifier for the imported CA certificate and StorePass is the password required to access the keystore file. Repeat this step to add all of the CA certificates to the trust store. A.5.4. Generate and Sign a New Certificate Overview In order for a certificate to be useful in the real world, it must be signed by a CA, which vouches for the authenticity of the certificate. This facilitates a scalable solution for certificate verification, because it means that a single CA certificate can be used to verify a large collection of certificates. Steps to generate and sign a new certificate To generate and sign a new certificate, using your own private CA, perform the following steps: Generate a certificate and private key pair using the keytool -genkeypair command, as follows: Because the specified keystore, alice.ks , did not exist prior to issuing the command, the command implicitly creates a new keystore and sets its password to StorePass . The -dname and -validity flags define the contents of the newly created X.509 certificate. Note When specifying the certificate's Distinguished Name (through the -dname parameter), you must be sure to observe any policy constraints specified in the openssl.cfg file. If those policy constraints are not heeded, you will not be able to sign the certificate using the CA (in the subsequent steps). Note It is essential to generate the key pair with the -keyalg RSA option (or a key algorithm of similar strength). The default key algorithm uses a combination of DSA encryption and SHA-1 signature. But the SHA-1 algorithm is no longer regarded as sufficiently secure and modern Web browsers will reject certificates signed using SHA-1. When you select the RSA key algorithm, the keytool utility uses an SHA-2 algorithm instead. Create a certificate signing request using the keytool -certreq command. Create a new certificate signing request for the alice.ks certificate and export it to the alice_csr.pem file, as follows: Sign the CSR using the openssl ca command. Sign the CSR for the Alice certificate, using your private CA, as follows: You will be prompted to enter the CA private key pass phrase you used when creating the CA (in the section called "Steps to set up a private Certificate Authority" ). For more details about the openssl ca command, see http://www.openssl.org/docs/apps/ca.html# . 
Convert the signed certificate to PEM only format using the openssl x509 command with the -outform option set to PEM . Enter the following command: Concatenate the CA certificate file and the converted, signed certificate file to form a certificate chain. For example, on Linux and UNIX platforms, you can concatenate the CA certificate file and the signed Alice certificate, alice_signed.pem , as follows: Import the new certificate's full certificate chain into the Java keystore using the keytool -import command. Enter the following command:
[ "https://www.redhat.com/secure", "C=IE,ST=Co. Dublin,L=Dublin,O=RedHat, OU=System,CN=www.redhat.com", "www.redhat.com www.jboss.org", "subjectAltName=DNS:www.redhat.com,DNS:www.jboss.org", "subjectAltName=DNS:*.jboss.org", "install openssl", "X509CA /demoCA X509CA /demoCA/private X509CA /demoCA/certs X509CA /demoCA/newcerts X509CA /demoCA/crl", "# SSLeay example configuration file. This is mostly being used for generation of certificate requests. # RANDFILE = ./.rnd #################################################################### [ req ] default_bits = 2048 default_keyfile = keySS.pem distinguished_name = req_distinguished_name encrypt_rsa_key = yes default_md = sha1 [ req_distinguished_name ] countryName = Country Name (2 letter code) organizationName = Organization Name (eg, company) commonName = Common Name (eg, YOUR name) #################################################################### [ ca ] default_ca = CA_default # The default ca section #################################################################### [ CA_default ] dir = ./demoCA # Where everything is kept certs = $dir/certs # Where the issued certs are kept crl_dir = $dir/crl # Where the issued crl are kept database = $dir/index.txt # database index file. #unique_subject = no # Set to 'no' to allow creation of # several certificates with same subject. new_certs_dir = $dir/newcerts # default place for new certs. certificate = $dir/cacert.pem # The CA certificate serial = $dir/serial # The current serial number crl = $dir/crl.pem # The current CRL private_key = $dir/private/cakey.pem # The private key RANDFILE = $dir/private/.rand # private random number file name_opt = ca_default # Subject Name options cert_opt = ca_default # Certificate field options default_days = 365 # how long to certify for default_crl_days = 30 # how long before next CRL default_md = md5 # which md to use. preserve = no # keep passed DN ordering policy = policy_anything [ policy_anything ] countryName = optional stateOrProvinceName = optional localityName = optional organizationName = optional organizationalUnitName = optional commonName = supplied emailAddress = optional", "echo 01 > demoCA/serial", "touch demoCA/index.txt", "openssl req -x509 -new -config openssl.cfg -days 365 -out demoCA/cacert.pem -keyout demoCA/private/cakey.pem", "Generating a 2048 bit RSA private key ...........................................................................+++ .................+++ writing new private key to 'demoCA/private/cakey.pem' Enter PEM pass phrase: Verifying - Enter PEM pass phrase: ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) []:DE Organization Name (eg, company) []:Red Hat Common Name (eg, YOUR name) []:Scooby Doo", "keytool -import -file cacert.pem -alias CAAlias -keystore truststore.ts -storepass StorePass", "keytool -genkeypair -keyalg RSA -dname \"CN=Alice, OU=Engineering, O=Red Hat, ST=Dublin, C=IE\" -validity 365 -alias alice -keypass KeyPass -keystore alice.ks -storepass StorePass", "keytool -certreq -alias alice -file alice_csr.pem -keypass KeyPass -keystore alice.ks -storepass StorePass", "openssl ca -config openssl.cfg -days 365 -in alice_csr.pem -out alice_signed.pem", "openssl x509 -in alice_signed.pem -out alice_signed.pem -outform PEM", "cat demoCA/cacert.pem alice_signed.pem > alice.chain", "keytool -import -file alice.chain -keypass KeyPass -keystore alice.ks -storepass StorePass" ]
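As an optional check that is not part of the documented procedure, you can confirm that the full certificate chain was imported correctly by listing the keystore entry, reusing the file names and passwords from the examples above:
keytool -list -v -alias alice -keystore alice.ks -storepass StorePass
The verbose output should report a certificate chain of length 2, with the Alice certificate first and the CA certificate as its issuer.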
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/ManageCerts
3.2.2.2. The Operating System
3.2.2.2. The Operating System It is difficult to determine how much processing power is consumed by the operating system. The reason for this is that operating systems use a mixture of process-level and system-level code to perform their work. While, for example, it is easy to use a process monitor to determine what the process running a daemon or service is doing, it is not so easy to determine how much processing power is being consumed by system-level I/O-related processing (which is normally done within the context of the process requesting the I/O.) In general, it is possible to divide this kind of operating system overhead into two types: Operating system housekeeping Process-related activities Operating system housekeeping includes activities such as process scheduling and memory management, while process-related activities include any processes that support the operating system itself, such as processes handling system-wide event logging or I/O cache flushing.
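As a rough illustration of this distinction, most Linux and UNIX systems include tools that report processor time split between user-level and system-level code; this is only an example, and the exact column names vary by tool and platform:
vmstat 5
In the CPU columns of the output, us approximates the time spent in process-level (application) code, while sy approximates the time spent in system-level (operating system) code, including I/O-related processing performed on behalf of processes.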
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-bandwidth-processing-consumers-os
Chapter 10. Customizing the system in the installer
Chapter 10. Customizing the system in the installer During the customization phase of the installation, you must perform certain configuration tasks to enable the installation of Red Hat Enterprise Linux. These tasks include: Configuring the storage and assigning mount points. Selecting a base environment with software to be installed. Setting a password for the root user or creating a local user. Optionally, you can further customize the system, for example, by configuring system settings and connecting the host to a network. 10.1. Setting the installer language You can select the language to be used by the installation program before starting the installation. Prerequisites You have created installation media. You have specified an installation source if you are using the Boot ISO image file. You have booted the installation. Procedure After you select the Install Red Hat Enterprise Linux option from the boot menu, the Welcome to Red Hat Enterprise Linux screen appears. From the left-hand pane of the Welcome to Red Hat Enterprise Linux window, select a language. Alternatively, search for the preferred language by using the text box. Note A language is pre-selected by default. If network access is configured, that is, if you booted from a network server instead of local media, the pre-selected language is determined by the automatic location detection feature of the GeoIP module. If you use the inst.lang= option on the boot command line or in your PXE server configuration, then the language that you define with the boot option is selected. From the right-hand pane of the Welcome to Red Hat Enterprise Linux window, select a location specific to your region. Click Continue to proceed to the graphical installations window. If you are installing a pre-release version of Red Hat Enterprise Linux, a warning message is displayed about the pre-release status of the installation media. To continue with the installation, click I want to proceed , or, to quit the installation and reboot the system, click I want to exit . 10.2. Configuring the storage devices You can install Red Hat Enterprise Linux on a large variety of storage devices. You can configure basic, locally accessible, storage devices in the Installation Destination window. Basic storage devices directly connected to the local system, such as disks and solid-state drives, are displayed in the Local Standard Disks section of the window. On 64-bit IBM Z, this section contains activated Direct Access Storage Devices (DASDs). Warning A known issue prevents DASDs configured as HyperPAV aliases from being automatically attached to the system after the installation is complete. These storage devices are available during the installation, but are not immediately accessible after you finish installing and reboot. To attach HyperPAV alias devices, add them manually to the /etc/dasd.conf configuration file of the system. 10.2.1. Configuring installation destination You can use the Installation Destination window to configure the storage options, for example, the disks that you want to use as the installation target for your Red Hat Enterprise Linux installation. You must select at least one disk. Prerequisites The Installation Summary window is open. Ensure that you back up your data if you plan to use a disk that already contains data. For example, if you want to shrink an existing Microsoft Windows partition and install Red Hat Enterprise Linux as a second system, or if you are upgrading a release of Red Hat Enterprise Linux. 
Manipulating partitions always carries a risk. For example, if the process is interrupted or fails for any reason, data on the disk can be lost. Procedure From the Installation Summary window, click Installation Destination . Perform the following operations in the Installation Destination window that opens: From the Local Standard Disks section, select the storage device that you require; a white check mark indicates your selection. Disks without a white check mark are not used during the installation process; they are ignored if you choose automatic partitioning, and they are not available in manual partitioning. The Local Standard Disks section shows all locally available storage devices, for example, SATA, IDE and SCSI disks, USB flash and external disks. Any storage devices connected after the installation program has started are not detected. If you use a removable drive to install Red Hat Enterprise Linux, your system is unusable if you remove the device. Optional: Click the Refresh link in the lower right-hand side of the window if you want to configure additional local storage devices to connect new disks. The Rescan Disks dialog box opens. Click Rescan Disks and wait until the scanning process completes. All storage changes that you make during the installation are lost when you click Rescan Disks . Click OK to return to the Installation Destination window. All detected disks including any new ones are displayed under the Local Standard Disks section. Optional: Click Add a disk to add a specialized storage device. The Storage Device Selection window opens and lists all storage devices that the installation program has access to. Optional: Under Storage Configuration , select the Automatic radio button for automatic partitioning. You can also configure custom partitioning. For more details, see Configuring manual partitioning . Optional: Select I would like to make additional space available to reclaim space from an existing partitioning layout. For example, if a disk you want to use already has a different operating system and you want to make this system's partitions smaller to allow more room for Red Hat Enterprise Linux. Optional: Select Encrypt my data to encrypt all partitions except the ones needed to boot the system (such as /boot ) using Linux Unified Key Setup (LUKS). Encrypting your disk adds an extra layer of security. Click Done . The Disk Encryption Passphrase dialog box opens. Type your passphrase in the Passphrase and Confirm fields. Click Save Passphrase to complete disk encryption. Warning If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, if you perform a Kickstart installation, you can save encryption passphrases and create backup encryption passphrases during the installation. For more information, see the Automatically installing RHEL document. Optional: Click the Full disk summary and bootloader link in the lower left-hand side of the window to select which storage device contains the boot loader. For more information, see Configuring boot loader . In most cases it is sufficient to leave the boot loader in the default location. Some configurations, for example, systems that require chain loading from another boot loader, require the boot drive to be specified manually. Click Done . 
Optional: The Reclaim Disk Space dialog box appears if you selected automatic partitioning and the I would like to make additional space available option, or if there is not enough free space on the selected disks to install Red Hat Enterprise Linux. It lists all configured disk devices and all partitions on those devices. The dialog box displays information about the minimal disk space the system needs for an installation with the currently selected package set and how much space you have reclaimed. To start the reclaiming process: Review the displayed list of available storage devices. The Reclaimable Space column shows how much space can be reclaimed from each entry. Select a disk or partition to reclaim space. Use the Shrink button to use free space on a partition while preserving the existing data. Use the Delete button to delete that partition or all partitions on a selected disk including existing data. Use the Delete all button to delete all existing partitions on all disks including existing data and make this space available to install Red Hat Enterprise Linux. Click Reclaim space to apply the changes and return to graphical installations. No disk changes are made until you click Begin Installation on the Installation Summary window. The Reclaim Space dialog only marks partitions for resizing or deletion; no action is performed. Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 10.2.2. Special cases during installation destination configuration Following are some special cases to consider when you are configuring installation destinations: Some BIOS types do not support booting from a RAID card. In these instances, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. It is necessary to use an internal disk for partition creation with problematic RAID cards. A /boot partition is also necessary for software RAID setups. If you choose to partition your system automatically, you should manually edit your /boot partition. To configure the Red Hat Enterprise Linux boot loader to chain load from a different boot loader, you must specify the boot drive manually by clicking the Full disk summary and bootloader link from the Installation Destination window. When you install Red Hat Enterprise Linux on a system with both multipath and non-multipath storage devices, the automatic partitioning layout in the installation program creates volume groups that contain a mix of multipath and non-multipath devices. This defeats the purpose of multipath storage. Select either multipath or non-multipath devices on the Installation Destination window. Alternatively, proceed to manual partitioning. 10.2.3. Configuring boot loader Red Hat Enterprise Linux uses GRand Unified Bootloader version 2 ( GRUB2 ) as the boot loader for AMD64 and Intel 64, IBM Power Systems, and ARM. For 64-bit IBM Z, the zipl boot loader is used. The boot loader is the first program that runs when the system starts and is responsible for loading and transferring control to an operating system. GRUB2 can boot any compatible operating system (including Microsoft Windows) and can also use chain loading to transfer control to other boot loaders for unsupported operating systems. Warning Installing GRUB2 may overwrite your existing boot loader. If an operating system is already installed, the Red Hat Enterprise Linux installation program attempts to automatically detect and configure the boot loader to start the other operating system. 
If the boot loader is not detected, you can manually configure any additional operating systems after you finish the installation. If you are installing a Red Hat Enterprise Linux system with more than one disk, you might want to manually specify the disk where you want to install the boot loader. Procedure From the Installation Destination window, click the Full disk summary and bootloader link. The Selected Disks dialog box opens. The boot loader is installed on the device of your choice; on a UEFI system, the EFI system partition is created on the target device during guided partitioning. To change the boot device, select a device from the list and click Set as Boot Device . You can set only one device as the boot device. To disable a new boot loader installation, select the device currently marked for boot and click Do not install boot loader . This ensures GRUB2 is not installed on any device. Warning If you choose not to install a boot loader, you cannot boot the system directly and you must use another boot method, such as a standalone commercial boot loader application. Use this option only if you have another way to boot your system. The boot loader may also require a special partition to be created, depending on whether your system uses BIOS or UEFI firmware, or if the boot drive has a GUID Partition Table (GPT) or a Master Boot Record (MBR, also known as msdos ) label. If you use automatic partitioning, the installation program creates the partition. 10.2.4. Storage device selection The storage device selection window lists all storage devices that the installation program can access. Depending on your system and available hardware, some tabs might not be displayed. The devices are grouped under the following tabs: Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fibre Channel ports on the same system. The installation program only detects multipath storage devices with serial numbers that are 16 or 32 characters long. Other SAN Devices Devices available on a Storage Area Network (SAN). Firmware RAID Storage devices attached to a firmware RAID controller. NVDIMM Devices Under specific circumstances, Red Hat Enterprise Linux 8 can boot and run from NVDIMM devices in sector mode on the Intel 64 and AMD64 architectures. IBM Z Devices Storage devices, or Logical Units (LUNs), DASD, attached through the zSeries Linux FCP (Fibre Channel Protocol) driver. 10.2.5. Filtering storage devices In the storage device selection window you can filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN). Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the Search by tab to search by port, target, LUN, or WWID. Searching by WWID or LUN requires additional values in the corresponding input text fields. Select the option that you require from the Search drop-down menu. Click Find to start the search. Each device is presented on a separate row with a corresponding check box. Select the check box to enable the device that you require during the installation process. 
Later in the installation process you can choose to install Red Hat Enterprise Linux on any of the selected devices, and you can choose to mount any of the other selected devices as part of the installed system automatically. Selected devices are not automatically erased by the installation process and selecting a device does not put the data stored on the device at risk. Note You can add devices to the system after installation by modifying the /etc/fstab file. Click Done to return to the Installation Destination window. Any storage devices that you do not select are hidden from the installation program entirely. To chain load the boot loader from a different boot loader, select all the devices present. 10.2.6. Using advanced storage options To use an advanced storage device, you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre Channel over Ethernet) SAN (Storage Area Network). To use iSCSI storage devices for the installation, the installation program must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a user name and password for Challenge Handshake Authentication Protocol (CHAP) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached (reverse CHAP), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP. Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the user name and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps to add all required iSCSI storage. You cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. 10.2.6.1. Discovering and starting an iSCSI session The Red Hat Enterprise Linux installer can discover and log in to iSCSI disks in two ways: iSCSI Boot Firmware Table (iBFT) When the installer starts, it checks if the BIOS or add-on boot ROMs of the system support iBFT. It is a BIOS extension for systems that can boot from iSCSI. If the BIOS supports iBFT, the installer reads the iSCSI target information for the configured boot disk from the BIOS and logs in to this target, making it available as an installation target. To automatically connect to an iSCSI target, activate a network device for accessing the target. To do so, use the ip=ibft boot option. For more information, see Network boot options . Discover and add iSCSI targets manually You can discover and start an iSCSI session to identify available iSCSI targets (network storage devices) in the installer's graphical user interface. Prerequisites The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add iSCSI target . The Add iSCSI Storage Target window opens. Important You cannot place the /boot partition on iSCSI targets that you have manually added using this method - an iSCSI target containing a /boot partition must be configured for use with iBFT. 
However, in instances where the installed system is expected to boot from iSCSI with iBFT configuration provided by a method other than firmware iBFT, for example using iPXE, you can remove the /boot partition restriction using the inst.nonibftiscsiboot installer boot option. Enter the IP address of the iSCSI target in the Target IP Address field. Type a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN entry contains the following information: The string iqn. (note the period). A date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. Your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage . A colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example :diskarrays-sn-a8675309 . A complete IQN is as follows: iqn.2010-09.com.example.storage:diskarrays-sn-a8675309 . The installation program prepopulates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information about IQNs, see 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from tools.ietf.org and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from tools.ietf.org. Select the Discovery Authentication Type drop-down menu to specify the type of authentication to use for iSCSI discovery. The following options are available: No credentials CHAP pair CHAP pair and a reverse pair Do one of the following: If you selected CHAP pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password fields. If you selected CHAP pair and a reverse pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password field, and the user name and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Optional: Select the Bind targets to network interfaces check box. Click Start Discovery . The installation program attempts to discover an iSCSI target based on the information provided. If discovery succeeds, the Add iSCSI Storage Target window displays a list of all iSCSI nodes discovered on the target. Select the check boxes for the node that you want to use for installation. The Node login authentication type menu contains the same options as the Discovery Authentication Type menu. However, if you need credentials for discovery authentication, use the same credentials to log in to a discovered node. Click the additional Use the credentials from discovery drop-down menu. When you provide the proper credentials, the Log In button becomes available. Click Log In to initiate an iSCSI session. While the installer uses iscsiadm to find and log into iSCSI targets, iscsiadm automatically stores any information about these targets in the iscsiadm iSCSI database. The installer then copies this database to the installed system and marks any iSCSI targets that are not used for the root partition, so that the system automatically logs in to them when it starts. 
If the root partition is placed on an iSCSI target, initrd logs into this target and the installer does not include this target in start up scripts to avoid multiple attempts to log into the same target. 10.2.6.2. Configuring FCoE parameters You can discover the FCoE (Fibre Channel over Ethernet) devices from the Installation Destination window by configuring the FCoE parameters accordingly. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add FCoE SAN . A dialog box opens for you to configure network interfaces for discovering FCoE storage devices. Select a network interface that is connected to an FCoE switch in the NIC drop-down menu. Click Add FCoE disk(s) to scan the network for SAN devices. Select the required check boxes: Use DCB: Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Select the check box to enable or disable the installation program's awareness of DCB. Enable this option only for network interfaces that require a host-based DCBX client. For configurations on interfaces that use a hardware DCBX client, disable the check box. Use auto vlan: Auto VLAN is enabled by default and indicates whether VLAN discovery should be performed. If this check box is enabled, then the FIP (FCoE Initiation Protocol) VLAN discovery protocol runs on the Ethernet interface when the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs are automatically created and FCoE instances are created on the VLAN interfaces. Discovered FCoE devices are displayed under the Other SAN Devices tab in the Installation Destination window. 10.2.6.3. Configuring DASD storage devices You can discover and configure the DASD storage devices from the Installation Destination window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add DASD ECKD . The Add DASD Storage Target dialog box opens and prompts you to specify a device number, such as 0.0.0204 , and attach additional DASDs that were not detected when the installation started. Type the device number of the DASD that you want to attach in the Device number field. Click Start Discovery . If a DASD with the specified device number is found and if it is not already attached, the dialog box closes and the newly-discovered drives appear in the list of drives. You can then select the check boxes for the required devices and click Done . The new DASDs are available for selection, marked as DASD device 0.0. xxxx in the Local Standard Disks section of the Installation Destination window. If you entered an invalid device number, or if the DASD with the specified device number is already attached to the system, an error message appears in the dialog box, explaining the error and prompting you to try again with a different device number. Additional resources Preparing an ECKD type DASD for use 10.2.6.4. 
Configuring FCP devices FCP devices enable 64-bit IBM Z to use SCSI devices rather than, or in addition to, Direct Access Storage Device (DASD) devices. FCP devices provide a switched fabric topology that enables 64-bit IBM Z systems to use SCSI LUNs as disk devices in addition to traditional DASD devices. Prerequisites The Installation Summary window is open. For an FCP-only installation, you have removed the DASD= option from the CMS configuration file or the rd.dasd= option from the parameter file to indicate that no DASD is present. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add ZFCP LUN . The Add zFCP Storage Target dialog box opens allowing you to add a FCP (Fibre Channel Protocol) storage device. 64-bit IBM Z requires that you enter any FCP device manually so that the installation program can activate FCP LUNs. You can enter FCP devices either in the graphical installation, or as a unique parameter entry in the parameter or CMS configuration file. The values that you enter must be unique to each site that you configure. Type the 4 digit hexadecimal device number in the Device number field. When installing RHEL-8.6 or older releases or if the zFCP device is not configured in NPIV mode, or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter, provide the following values: Type the 16 digit hexadecimal World Wide Port Number (WWPN) in the WWPN field. Type the 16 digit hexadecimal FCP LUN identifier in the LUN field. Click Start Discovery to connect to the FCP device. The newly-added devices are displayed in the IBM Z tab of the Installation Destination window. Use only lower-case letters in hex values. If you enter an incorrect value and click Start Discovery , the installation program displays a warning. You can edit the configuration information and retry the discovery attempt. For more information about these values, consult the hardware documentation and check with your system administrator. 10.2.7. Installing to an NVDIMM device Non-Volatile Dual In-line Memory Module (NVDIMM) devices combine the performance of RAM with disk-like data persistence when no power is supplied. Under specific circumstances, Red Hat Enterprise Linux 8 can boot and run from NVDIMM devices. 10.2.7.1. Criteria for using an NVDIMM device as an installation target You can install Red Hat Enterprise Linux 8 to Non-Volatile Dual In-line Memory Module (NVDIMM) devices in sector mode on the Intel 64 and AMD64 architectures, supported by the nd_pmem driver. Conditions for using an NVDIMM device as storage To use an NVDIMM device as storage, the following conditions must be satisfied: The architecture of the system is Intel 64 or AMD64. The NVDIMM device is configured to sector mode. The installation program can reconfigure NVDIMM devices to this mode. The NVDIMM device must be supported by the nd_pmem driver. Conditions for booting from an NVDIMM Device Booting from an NVDIMM device is possible under the following conditions: All conditions for using the NVDIMM device as storage are satisfied. The system uses UEFI. The NVDIMM device must be supported by firmware available on the system, or by an UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The NVDIMM device must be made available under a namespace. 
To utilize the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. 10.2.7.2. Configuring an NVDIMM device using the graphical installation mode A Non-Volatile Dual In-line Memory Module (NVDIMM) device must be properly configured for use by Red Hat Enterprise Linux 8 using the graphical installation. Warning Reconfiguring an NVDIMM device destroys any data stored on the device. Prerequisites An NVDIMM device is present on the system and satisfies all the other conditions for usage as an installation target. The installation has booted and the Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the NVDIMM Devices tab. To reconfigure a device, select it from the list. If a device is not listed, it is not in sector mode. Click Reconfigure NVDIMM . A reconfiguration dialog opens. Enter the sector size that you require and click Start Reconfiguration . The supported sector sizes are 512 and 4096 bytes. When reconfiguration completes, click OK . Select the device check box. Click Done to return to the Installation Destination window. The NVDIMM device that you reconfigured is displayed in the Specialized & Network Disks section. Click Done to return to the Installation Summary window. The NVDIMM device is now available for you to select as an installation target. Additionally, if the device meets the requirements for booting, you can set the device as a boot device. 10.3. Configuring the root user and creating local accounts 10.3.1. Configuring a root password You must configure a root password to finish the installation process and to log in to the administrator (also known as superuser or root) account that is used for system administration tasks. These tasks include installing and updating software packages and changing system-wide configuration such as network and firewall settings, storage options, and adding or modifying users, groups and file permissions. To gain root privileges to the installed systems, you can either use a root account or create a user account with administrative privileges (member of the wheel group). The root account is always created during the installation. Switch to the administrator account only when you need to perform a task that requires administrator access. Warning The root account has complete control over the system. If unauthorized personnel gain access to the account, they can access or delete users' personal files. Procedure From the Installation Summary window, select User Settings > Root Password . The Root Password window opens. Type your password in the Root Password field. The requirements for creating a strong root password are: Must be at least eight characters long May contain numbers, letters (upper and lower case) and symbols Is case-sensitive Type the same password in the Confirm field. Click Done to confirm your root password and return to the Installation Summary window. If you proceed with a weak password, you must click Done twice. 10.3.2. Creating a user account Create a user account to finish the installation. 
If you do not create a user account, you must log in to the system as root directly, which is not recommended. Procedure On the Installation Summary window, select User Settings > User Creation . The Create User window opens. Type the user account name in to the Full name field, for example: John Smith. Type the username in to the User name field, for example: jsmith. The User name is used to log in from a command line; if you install a graphical environment, then your graphical login manager uses the Full name . Select the Make this user administrator check box if the user requires administrative rights (the installation program adds the user to the wheel group ). An administrator user can use the sudo command to perform tasks that are only available to root using the user password, instead of the root password. This may be more convenient, but it can also cause a security risk. Select the Require a password to use this account check box. If you give administrator privileges to a user, ensure the account is password protected. Never give a user administrator privileges without assigning a password to the account. Type a password into the Password field. Type the same password into the Confirm password field. Click Done to apply the changes and return to the Installation Summary window. 10.3.3. Editing advanced user settings This procedure describes how to edit the default settings for the user account in the Advanced User Configuration dialog box. Procedure On the Create User window, click Advanced . Edit the details in the Home directory field, if required. The field is populated by default with /home/ username . In the User and Groups IDs section you can: Select the Specify a user ID manually check box and use + or - to enter the required value. The default value is 1000. User IDs (UIDs) 0-999 are reserved by the system so they cannot be assigned to a user. Select the Specify a group ID manually check box and use + or - to enter the required value. The default group name is the same as the user name, and the default Group ID (GID) is 1000. GIDs 0-999 are reserved by the system so they can not be assigned to a user group. Specify additional groups as a comma-separated list in the Group Membership field. Groups that do not already exist are created; you can specify custom GIDs for additional groups in parentheses. If you do not specify a custom GID for a new group, the new group receives a GID automatically. The user account created always has one default group membership (the user's default group with an ID set in the Specify a group ID manually field). Click Save Changes to apply the updates and return to the Create User window. 10.4. Configuring manual partitioning You can use manual partitioning to configure your disk partitions and mount points and define the file system that Red Hat Enterprise Linux is installed on. Before installation, you should consider whether you want to use partitioned or unpartitioned disk devices. For more information about the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM, see the Red Hat Knowledgebase solution advantages and disadvantages to using partitioning on LUNs . You have different partitioning and storage options available, including Standard Partitions , LVM , and LVM thin provisioning . These options provide various benefits and configurations for managing your system's storage effectively. Standard partition A standard partition contains a file system or swap space. 
Standard partitions are most commonly used for /boot and the BIOS Boot and EFI System partitions . You can use the LVM logical volumes in most other uses. LVM Choosing LVM (or Logical Volume Management) as the device type creates an LVM logical volume. LVM improves performance when using physical disks, and it allows for advanced setups such as using multiple physical disks for one mount point, and setting up software RAID for increased performance, reliability, or both. LVM thin provisioning Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can dynamically expand the pool when needed for cost-effective allocation of storage space. An installation of Red Hat Enterprise Linux requires a minimum of one partition but uses at least the following partitions or volumes: / , /home , /boot , and swap . You can also create additional partitions and volumes as you require. To prevent data loss it is recommended that you back up your data before proceeding. If you are upgrading or creating a dual-boot system, you should back up any data you want to keep on your storage devices. 10.4.1. Recommended partitioning scheme Create separate file systems at the following mount points. However, if required, you can also create the file systems at /usr , /var , and /tmp mount points. /boot / (root) /home swap /boot/efi PReP This partition scheme is recommended for bare metal deployments and it does not apply to virtual and cloud deployments. /boot partition - recommended size at least 1 GiB The partition mounted on /boot contains the operating system kernel, which allows your system to boot Red Hat Enterprise Linux 8, along with files used during the bootstrap process. Due to the limitations of most firmwares, create a small partition to hold these. In most scenarios, a 1 GiB boot partition is adequate. Unlike other mount points, using an LVM volume for /boot is not possible - /boot must be located on a separate disk partition. If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In such a case, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. Warning Normally, the /boot partition is created automatically by the installation program. However, if the / (root) partition is larger than 2 TiB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TiB to boot the machine successfully. Ensure the /boot partition is located within the first 2 TB of the disk while manual partitioning. Placing the /boot partition beyond the 2 TB boundary might result in a successful installation, but the system fails to boot because BIOS cannot read the /boot partition beyond this limit. root - recommended size of 10 GiB This is where " / ", or the root directory, is located. The root directory is the top-level of the directory structure. By default, all files are written to this file system unless a different file system is mounted in the path being written to, for example, /boot or /home . While a 5 GiB root file system allows you to install a minimal installation, it is recommended to allocate at least 10 GiB so that you can install as many package groups as you want. Do not confuse the / directory with the /root directory. The /root directory is the home directory of the root user. 
The /root directory is sometimes referred to as slash root to distinguish it from the root directory. /home - recommended size at least 1 GiB To store user data separately from system data, create a dedicated file system for the /home directory. Base the file system size on the amount of data that is stored locally, number of users, and so on. You can upgrade or reinstall Red Hat Enterprise Linux 8 without erasing user data files. If you select automatic partitioning, it is recommended to have at least 55 GiB of disk space available for the installation, to ensure that the /home file system is created. swap partition - recommended size at least 1 GiB Swap file systems support virtual memory; data is written to a swap file system when there is not enough RAM to store the data your system is processing. Swap size is a function of system memory workload, not total system memory, and therefore is not equal to the total system memory size. It is important to analyze what applications a system will be running and the load those applications will serve in order to determine the system memory workload. Application providers and developers can provide guidance. When the system runs out of swap space, the kernel terminates processes as the system RAM memory is exhausted. Configuring too much swap space results in storage devices being allocated but idle and is a poor use of resources. Too much swap space can also hide memory leaks. The maximum size for a swap partition and other additional information can be found in the mkswap(8) manual page. The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and on whether you want sufficient memory for your system to hibernate. If you let the installation program partition your system automatically, the swap partition size is established using these guidelines. Automatic partitioning setup assumes hibernation is not in use. The maximum size of the swap partition is limited to 10 percent of the total size of the disk, and the installation program cannot create swap partitions larger than 1 TiB. To set up enough swap space to allow for hibernation, or if you want to set the swap partition size to more than 10 percent of the system's storage space, or more than 1 TiB, you must edit the partitioning layout manually.
Table 10.1. Recommended system swap space
Less than 2 GiB of RAM: recommended swap space is 2 times the amount of RAM; 3 times the amount of RAM if allowing for hibernation.
2 GiB - 8 GiB of RAM: recommended swap space is equal to the amount of RAM; 2 times the amount of RAM if allowing for hibernation.
8 GiB - 64 GiB of RAM: recommended swap space is 4 GiB to 0.5 times the amount of RAM; 1.5 times the amount of RAM if allowing for hibernation.
More than 64 GiB of RAM: recommended swap space is workload dependent (at least 4 GiB); hibernation is not recommended.
/boot/efi partition - recommended size of 200 MiB UEFI-based AMD64, Intel 64, and 64-bit ARM require a 200 MiB EFI system partition. The recommended minimum size is 200 MiB, the default size is 600 MiB, and the maximum size is 600 MiB. BIOS systems do not require an EFI system partition. At the border between each range, for example, a system with 2 GiB, 8 GiB, or 64 GiB of system RAM, discretion can be exercised with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space can lead to better performance. Distributing swap space over multiple storage devices - particularly on systems with fast drives, controllers and interfaces - also improves swap space performance. 
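As a quick worked example of the guidance in Table 10.1, the following shell sketch reads the installed RAM and prints the matching recommendation for the non-hibernation case. This is for illustration only and rounds down to whole GiB values; during automatic partitioning the installation program applies these guidelines for you.
#!/usr/bin/env bash
# Map installed RAM to the swap guidance in Table 10.1 (no hibernation).
ram_kib=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ram_gib=$(( ram_kib / 1024 / 1024 ))   # rounded down to whole GiB
if   (( ram_gib < 2 ));   then echo "RAM ${ram_gib} GiB: recommended swap is 2 times the RAM"
elif (( ram_gib <= 8 ));  then echo "RAM ${ram_gib} GiB: recommended swap equals the RAM"
elif (( ram_gib <= 64 )); then echo "RAM ${ram_gib} GiB: recommended swap is 4 GiB to 0.5 times the RAM"
else                           echo "RAM ${ram_gib} GiB: workload dependent (at least 4 GiB)"
fi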
Many systems have more partitions and volumes than the minimum required. Choose partitions based on your particular system needs. If you are unsure about configuring partitions, accept the automatic default partition layout provided by the installation program. Note Only assign storage capacity to those partitions you require immediately. You can allocate free space at any time, to meet needs as they occur. PReP boot partition - recommended size of 4 to 8 MiB When installing Red Hat Enterprise Linux on IBM Power System servers, the first partition of the disk should include a PReP boot partition. This contains the GRUB boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 10.4.2. Supported hardware storage It is important to understand how storage technologies are configured and how support for them may have changed between major versions of Red Hat Enterprise Linux. Hardware RAID Any RAID functions provided by the mainboard of your computer, or attached controller cards, need to be configured before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. Software RAID On systems with more than one disk, you can use the Red Hat Enterprise Linux installation program to operate several of the drives as a Linux software RAID array. With a software RAID array, RAID functions are controlled by the operating system rather than the dedicated hardware. Note When a pre-existing RAID array's member devices are all unpartitioned disks/drives, the installation program treats the array as a disk and there is no method to remove the array. USB Disks You can connect and configure external USB storage after installation. Most devices are recognized by the kernel, but some devices may not be recognized. If it is not a requirement to configure these disks during installation, disconnect them to avoid potential problems. NVDIMM devices To use a Non-Volatile Dual In-line Memory Module (NVDIMM) device as storage, the following conditions must be satisfied: Version of Red Hat Enterprise Linux is 7.6 or later. The architecture of the system is Intel 64 or AMD64. The device is configured to sector mode. Anaconda can reconfigure NVDIMM devices to this mode. The device must be supported by the nd_pmem driver. Booting from an NVDIMM device is possible under the following additional conditions: The system uses UEFI. The device must be supported by firmware available on the system, or by a UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The device must be made available under a namespace. To take advantage of the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. Note The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. Considerations for Intel BIOS RAID Sets Red Hat Enterprise Linux uses mdraid for installing on Intel BIOS RAID sets. These sets are automatically detected during the boot process and their device node paths can change across several booting processes. Replace device node paths (such as /dev/sda ) with file system labels or device UUIDs. You can find the file system labels and device UUIDs using the blkid command. 10.4.3. Starting manual partitioning You can partition the disks based on your requirements by using manual partitioning. Prerequisites The Installation Summary screen is open. 
All disks are available to the installation program. Procedure Select disks for installation: Click Installation Destination to open the Installation Destination window. Select the disks that you require for installation by clicking the corresponding icon. A selected disk has a check-mark displayed on it. Under Storage Configuration , select the Custom radio-button. Optional: To enable storage encryption with LUKS, select the Encrypt my data check box. Click Done . If you selected to encrypt the storage, a dialog box for entering a disk encryption passphrase opens. Type in the LUKS passphrase: Enter the passphrase in the two text fields. To switch keyboard layout, use the keyboard icon. Warning In the dialog box for entering the passphrase, you cannot change the keyboard layout. Select the English keyboard layout to enter the passphrase in the installation program. Click Save Passphrase . The Manual Partitioning window opens. Detected mount points are listed in the left-hand pane. The mount points are organized by detected operating system installations. As a result, some file systems may be displayed multiple times if a partition is shared among several installations. Select the mount points in the left pane; the options that can be customized are displayed in the right pane. Optional: If your system contains existing file systems, ensure that enough space is available for the installation. To remove any partitions, select them in the list and click the - button. The dialog has a check box that you can use to remove all other partitions used by the system to which the deleted partition belongs. Optional: If there are no existing partitions and you want to create a set of partitions as a starting point, select your preferred partitioning scheme from the left pane (default for Red Hat Enterprise Linux is LVM) and click the Click here to create them automatically link. Note A /boot partition, a / (root) volume, and a swap volume proportional to the size of the available storage are created and listed in the left pane. These are the file systems for a typical installation, but you can add additional file systems and mount points. Click Done to confirm any changes and return to the Installation Summary window. 10.4.4. Supported file systems When configuring manual partitioning, you can optimize performance, ensure compatibility, and effectively manage disk space by utilizing the various file systems and partition types available in Red Hat Enterprise Linux. xfs XFS is a highly scalable, high-performance file system that supports file systems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes), and directory structures containing tens of millions of entries. XFS also supports metadata journaling, which facilitates quicker crash recovery. The maximum supported size of a single XFS file system is 500 TB. XFS is the default file system on Red Hat Enterprise Linux. The XFS filesystem cannot be shrunk to get free space. ext4 The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. The maximum supported size of a single ext4 file system is 50 TB. ext3 The ext3 file system is based on the ext2 file system and has one main advantage - journaling. 
Using a journaling file system reduces the time spent recovering a file system after it terminates unexpectedly, as there is no need to check the file system for metadata consistency by running the fsck utility every time. ext2 An ext2 file system supports standard Unix file types, including regular files, directories, and symbolic links. It provides the ability to assign long file names, up to 255 characters. swap Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. vfat The VFAT file system is a Linux file system that is compatible with Microsoft Windows long file names on the FAT file system. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr , and so on. BIOS Boot A very small partition required for booting from a device with a GUID partition table (GPT) on BIOS systems and UEFI systems in BIOS compatibility mode. EFI System Partition A small partition required for booting a device with a GUID partition table (GPT) on a UEFI system. PReP This small boot partition is located on the first partition of the disk. The PReP boot partition contains the GRUB2 boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 10.4.5. Adding a mount point file system You can add multiple mount point file systems. You can use any of the file systems and partition types available, such as XFS, ext4, ext3, ext2, swap, VFAT, and specific partitions like BIOS Boot, EFI System Partition, and PReP to effectively configure your system's storage. Prerequisites You have planned your partitions. Ensure that you have not specified mount points at paths with symbolic links, such as /var/mail , /usr/tmp , /lib , /sbin , /lib64 , and /bin . The payload, including RPM packages, depends on creating symbolic links to specific directories. Procedure Click + to create a new mount point file system. The Add a New Mount Point dialog opens. Select one of the preset paths from the Mount Point drop-down menu or type your own; for example, select / for the root partition or /boot for the boot partition. Enter the size of the file system into the Desired Capacity field; for example, 2 GiB . If you do not specify a value in Desired Capacity , or if you specify a size bigger than available space, then all remaining free space is used. Click Add mount point to create the partition and return to the Manual Partitioning window. 10.4.6. Configuring storage for a mount point file system You can set the partitioning scheme for each mount point that was created manually. The available options are Standard Partition , LVM , and LVM Thin Provisioning . Btrfs support has been removed in Red Hat Enterprise Linux 8. Note The /boot partition is always located on a standard partition, regardless of the value selected. Procedure To change the devices that a single non-LVM mount point should be located on, select the required mount point from the left-hand pane. Under the Device(s) heading, click Modify . The Configure Mount Point dialog opens. Select one or more devices and click Select to confirm your selection and return to the Manual Partitioning window. Click Update Settings to apply the changes. In the lower left-hand side of the Manual Partitioning window, click the storage device selected link to open the Selected Disks dialog and review disk information. 
Optional: Click the Rescan button (circular arrow button) to refresh all local disks and partitions; this is only required after performing advanced partition configuration outside the installation program. Clicking the Rescan Disks button resets all configuration changes made in the installation program. 10.4.7. Customizing a mount point file system You can customize a partition or volume if you want to set specific settings. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex as these directories contain critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system is unable to boot, or hangs with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories below them. For example, a separate partition for /var/www works successfully. Procedure From the left pane, select the mount point. Figure 10.1. Customizing Partitions From the right-hand pane, you can customize the following options: Enter the file system mount point into the Mount Point field. For example, if a file system is the root file system, enter / ; enter /boot for the /boot file system, and so on. For a swap file system, do not set the mount point as setting the file system type to swap is sufficient. Enter the size of the file system in the Desired Capacity field. You can use common size units such as KiB or GiB. The default is MiB if you do not set any other unit. Select the device type that you require from the drop-down Device Type menu: Standard Partition , LVM , or LVM Thin Provisioning . Note RAID is available only if two or more disks are selected for partitioning. If you choose RAID , you can also set the RAID Level . Similarly, if you select LVM , you can specify the Volume Group . Select the Encrypt check box to encrypt the partition or volume. You must set a password later in the installation program. The LUKS Version drop-down menu is displayed. Select the LUKS version that you require from the drop-down menu. Select the appropriate file system type for this partition or volume from the File system drop-down menu. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr , and so on. Select the Reformat check box to format an existing partition, or clear the Reformat check box to retain your data. The newly-created partitions and volumes must be reformatted, and the check box cannot be cleared. Type a label for the partition in the Label field. Use labels to easily recognize and address individual partitions. Type a name in the Name field. The standard partitions are named automatically when they are created and you cannot edit the names of standard partitions. For example, you cannot edit the /boot name sda1 . Click Update Settings to apply your changes and if required, select another partition to customize. Changes are not applied until you click Begin Installation from the Installation Summary window. Optional: Click Reset All to discard your partition changes. Click Done when you have created and customized all file systems and mount points. If you choose to encrypt a file system, you are prompted to create a passphrase. A Summary of Changes dialog box opens, displaying a summary of all storage actions for the installation program. Click Accept Changes to apply the changes and return to the Installation Summary window. 10.4.8. 
Preserving the /home directory In a Red Hat Enterprise Linux 8 graphical installation, you can preserve the /home directory that was used on your RHEL 7 system. Preserving /home is only possible if the /home directory is located on a separate /home partition on your RHEL 7 system. Preserving the /home directory, which includes various configuration settings, ensures that the GNOME Shell environment on the new Red Hat Enterprise Linux 8 system is set up in the same way as it was on your RHEL 7 system. Note that this applies only to users on Red Hat Enterprise Linux 8 with the same user name and ID as on the RHEL 7 system. Prerequisites You have RHEL 7 installed on your computer. The /home directory is located on a separate /home partition on your RHEL 7 system. The Red Hat Enterprise Linux 8 Installation Summary window is open. Procedure Click Installation Destination to open the Installation Destination window. Under Storage Configuration , select the Custom radio button. Click Done . The Manual Partitioning window opens. Choose the /home partition, fill in /home under Mount Point , and clear the Reformat check box. Figure 10.2. Ensuring that /home is not formatted Optional: You can also customize various aspects of the /home partition required for your Red Hat Enterprise Linux 8 system as described in Customizing a mount point file system . However, to preserve /home from your RHEL 7 system, it is necessary to clear the Reformat check box. After you have customized all partitions according to your requirements, click Done . The Summary of changes dialog box opens. Verify that the Summary of changes dialog box does not show any change for /home . This means that the /home partition is preserved. Click Accept Changes to apply the changes, and return to the Installation Summary window. 10.4.9. Creating a software RAID during the installation Redundant Arrays of Independent Disks (RAID) devices are constructed from multiple storage devices that are arranged to provide increased performance and, in some configurations, greater fault tolerance. A RAID device is created in one step and disks are added or removed as necessary. You can configure one RAID partition for each physical disk in your system, so the number of disks available to the installation program determines which RAID levels are available. For example, if your system has two disks, you cannot create a RAID 10 device, as it requires a minimum of three separate disks. To optimize your system's storage performance and reliability, RHEL supports software RAID 0 , RAID 1 , RAID 4 , RAID 5 , RAID 6 , and RAID 10 types with LVM and LVM Thin Provisioning to set up storage on the installed system. Note On 64-bit IBM Z, the storage subsystem uses RAID transparently. You do not have to configure software RAID manually. Prerequisites You have selected two or more disks for installation. RAID configuration options are visible only when two or more disks are selected. Depending on the RAID type you want to create, at least two disks are required. You have created a mount point. By configuring a mount point, you can configure the RAID device. You have selected the Custom radio button on the Installation Destination window. Procedure From the left pane of the Manual Partitioning window, select the required partition. Under the Device(s) section, click Modify . The Configure Mount Point dialog box opens. Select the disks that you want to include in the RAID device and click Select . Click the Device Type drop-down menu and select RAID . 
Click the File System drop-down menu and select your preferred file system type. Click the RAID Level drop-down menu and select your preferred level of RAID. Click Update Settings to save your changes. Click Done to apply the settings to return to the Installation Summary window. Additional resources Creating a RAID LV with DM integrity Managing RAID 10.4.10. Creating an LVM logical volume Logical Volume Manager (LVM) presents a simple logical view of underlying physical storage space, such as disks or LUNs. Partitions on physical storage are represented as physical volumes that you can group together into volume groups. You can divide each volume group into multiple logical volumes, each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks. Important LVM configuration is available only in the graphical installation program. During text-mode installation, LVM configuration is not available. To create an LVM configuration, press Ctrl + Alt + F2 to use a shell prompt in a different virtual console. You can run vgcreate and lvm commands in this shell. To return to the text-mode installation, press Ctrl + Alt + F1 . Procedure From the Manual Partitioning window, create a new mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter manually. Enter the size of the file system in to the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Select LVM in the drop-down menu. The Volume Group drop-down menu is displayed with the newly-created volume group name. Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information about Kickstart, see the Automatically installing RHEL . Click Done to return to the Installation Summary window. Additional resources Configuring and managing logical volumes 10.4.11. Configuring an LVM logical volume You can configure a newly-created LVM logical volume based on your requirements. Warning Placing the /boot partition on an LVM volume is not supported. Procedure From the Manual Partitioning window, create a mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter manually. Enter the size of the file system in to the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Click the Device Type drop-down menu and select LVM . The Volume Group drop-down menu is displayed with the newly-created volume group name. Click Modify to configure the newly-created volume group. The Configure Volume Group dialog box opens. Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. 
If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information, see the Automatically installing RHEL document. Optional: From the RAID Level drop-down menu, select the RAID level that you require. The available RAID levels are the same as with actual RAID devices. Select the Encrypt check box to mark the volume group for encryption. From the Size policy drop-down menu, select any of the following size policies for the volume group: The available policy options are: Automatic The size of the volume group is set automatically so that it is large enough to contain the configured logical volumes. This is optimal if you do not need free space within the volume group. As large as possible The volume group is created with maximum size, regardless of the size of the configured logical volumes it contains. This is optimal if you plan to keep most of your data on LVM and later need to increase the size of some existing logical volumes, or if you need to create additional logical volumes within this group. Fixed You can set an exact size of the volume group. Any configured logical volumes must then fit within this fixed size. This is useful if you know exactly how large you need the volume group to be. Click Save to apply the settings and return to the Manual Partitioning window. Click Update Settings to save your changes. Click Done to return to the Installation Summary window. 10.4.12. Advice on partitions There is no best way to partition every system; the optimal setup depends on how you plan to use the system being installed. However, the following tips may help you find the optimal layout for your needs: Create partitions that have specific requirements first, for example, if a particular partition must be on a specific disk. Consider encrypting any partitions and volumes which might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition, which contains user data. In some cases, creating separate mount points for directories other than / , /boot and /home may be useful; for example, on a server running a MySQL database, having a separate mount point for /var/lib/mysql allows you to preserve the database during a re-installation without having to restore it from backup afterward. However, having unnecessary separate mount points will make storage administration more difficult. Some special restrictions apply to certain directories with regards to which partitioning layouts can be placed. Notably, the /boot directory must always be on a physical partition (not on an LVM volume). If you are new to Linux, consider reviewing the Linux Filesystem Hierarchy Standard for information about various system directories and their contents. Each kernel requires approximately: 60MiB (initrd 34MiB, 11MiB vmlinuz, and 5MiB System.map) For rescue mode: 100MiB (initrd 76MiB, 11MiB vmlinuz, and 5MiB System map) When kdump is enabled in system it will take approximately another 40MiB (another initrd with 33MiB) The default partition size of 1 GiB for /boot should suffice for most common use cases. However, increase the size of this partition if you are planning on retaining multiple kernel releases or errata kernels. 
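For example, before deciding whether to grow /boot beyond the 1 GiB default, you can check on an existing system how many kernels are installed and how much space they use. The commands below are standard RHEL tools and the output will differ from system to system:
# rpm -q kernel              # list the installed kernel packages
# du -sh /boot               # total space currently used under /boot
# ls -lh /boot/initramfs-*   # the initramfs images are usually the largest files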
The /var directory holds content for a number of applications, including the Apache web server, and is used by the YUM package manager to temporarily store downloaded package updates. Make sure that the partition or volume containing /var has at least 5 GiB. The /usr directory holds the majority of software on a typical Red Hat Enterprise Linux installation. The partition or volume containing this directory should therefore be at least 5 GiB for minimal installations, and at least 10 GiB for installations with a graphical environment. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories under them. For example, a separate partition for /var/www works without issues. Important Some security policies require the separation of /usr and /var , even though it makes administration more complex. Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other volumes. You can also select the LVM Thin Provisioning device type for the partition to have the unused space handled automatically by the volume. The size of an XFS file system cannot be reduced - if you need to make a partition or volume with this file system smaller, you must back up your data, destroy the file system, and create a new, smaller one in its place. Therefore, if you plan to alter your partitioning layout later, you should use the ext4 file system instead. Use Logical Volume Manager (LVM) if you anticipate expanding your storage by adding more disks or expanding virtual machine disks after the installation. With LVM, you can create physical volumes on the new drives, and then assign them to any volume group and logical volume as you see fit - for example, you can easily expand your system's /home (or any other directory residing on a logical volume). Creating a BIOS Boot partition or an EFI System Partition may be necessary, depending on your system's firmware, boot drive size, and boot drive disk label. Note that you cannot create a BIOS Boot or EFI System Partition in graphical installation if your system does not require one - in that case, they are hidden from the menu. If you need to make any changes to your storage configuration after the installation, Red Hat Enterprise Linux repositories offer several different tools which can help you do this. If you prefer a command-line tool, try system-storage-manager . Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 10.5. Selecting the base environment and additional software Use the Software Selection window to select the software packages that you require. The packages are organized by Base Environment and Additional Software. Base Environment contains predefined packages. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom operating system, Virtualization Host. The availability is dependent on the installation ISO image that is used as the installation source. 
Additional Software for Selected Environment contains additional software packages for the base environment. You can select multiple software packages. Use a predefined environment and additional software to customize your system. However, in a standard installation, you cannot select individual packages to install. To view the packages contained in a specific environment, see the repository /repodata/*-comps- repository . architecture .xml file on your installation source media (DVD, CD, USB). The XML file contains details of the packages installed as part of a base environment. Available environments are marked by the <environment> tag, and additional software packages are marked by the <group> tag. If you are unsure about which packages to install, select the Minimal Install base environment. Minimal install installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. After the system finishes installing and you log in for the first time, you can use the YUM package manager to install additional software. For more information about YUM package manager, see the Configuring basic system settings document. Note Use the yum group list command from any RHEL 8 system to view the list of packages being installed on the system as a part of software selection. For more information, see Configuring basic system settings . If you need to control which packages are installed, you can use a Kickstart file and define the packages in the %packages section. Prerequisites You have configured the installation source. The installation program has downloaded package metadata. The Installation Summary window is open. Procedure From the Installation Summary window, click Software Selection . The Software Selection window opens. From the Base Environment pane, select a base environment. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom Operating System, Virtualization Host. By default, the Server with GUI base environment is selected. Figure 10.3. Red Hat Enterprise Linux Software Selection From the Additional Software for Selected Environment pane, select one or more options. Click Done to apply the settings and return to graphical installations. 10.6. Optional: Configuring the network and host name Use the Network and Host name window to configure network interfaces. Options that you select here are available both during the installation for tasks such as downloading packages from a remote location, and on the installed system. Follow the steps in this procedure to configure your network and host name. Procedure From the Installation Summary window, click Network and Host Name . From the list in the left-hand pane, select an interface. The details are displayed in the right-hand pane. Toggle the ON/OFF switch to enable or disable the selected interface. You cannot add or remove interfaces manually. Click + to add a virtual network interface, which can be either: Team, Bond, Bridge, or VLAN. Click - to remove a virtual interface. Click Configure to change settings such as IP addresses, DNS servers, or routing configuration for an existing interface (both virtual and physical). Type a host name for your system in the Host Name field. The host name can either be a fully qualified domain name (FQDN) in the format hostname.domainname , or a short host name without the domain. 
Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this system, specify only the short host name. Host names can only contain alphanumeric characters and - or . . Host name should be equal to or less than 64 characters. Host names cannot start or end with - and . . To be compliant with DNS, each part of a FQDN should be equal to or less than 63 characters and the FQDN total length, including dots, should not exceed 255 characters. The value localhost means that no specific static host name for the target system is configured, and the actual host name of the installed system is configured during the processing of the network configuration, for example, by NetworkManager using DHCP or DNS. When using static IP and host name configuration, it depends on the planned system use case whether to use a short name or FQDN. Red Hat Identity Management configures FQDN during provisioning but some 3rd party software products may require a short name. In either case, to ensure availability of both forms in all situations, add an entry for the host in /etc/hosts in the format IP FQDN short-alias . Click Apply to apply the host name to the installer environment. Alternatively, in the Network and Hostname window, you can choose the Wireless option. Click Select network in the right-hand pane to select your wifi connection, enter the password if required, and click Done . Additional resources For more information about network device naming standards, see Configuring and managing networking . 10.6.1. Adding a virtual network interface You can add a virtual network interface. Procedure From the Network & Host name window, click the + button to add a virtual network interface. The Add a device dialog opens. Select one of the four available types of virtual interfaces: Bond : NIC ( Network Interface Controller ) Bonding, a method to bind multiple physical network interfaces together into a single bonded channel. Bridge : Represents NIC Bridging, a method to connect multiple separate networks into one aggregate network. Team : NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. Vlan ( Virtual LAN ): A method to create multiple distinct broadcast domains which are mutually isolated. Select the interface type and click Add . An editing interface dialog box opens, allowing you to edit any available settings for your chosen interface type. For more information, see Editing network interface . Click Save to confirm the virtual interface settings and return to the Network & Host name window. Optional: To change the settings of a virtual interface, select the interface and click Configure . 10.6.2. Editing network interface configuration You can edit the configuration of a typical wired connection used during installation. Configuration of other types of networks is broadly similar, although the specific configuration parameters might be different. Note On 64-bit IBM Z, you cannot add a new connection as the network subchannels need to be grouped and set online beforehand, and this is currently done only in the booting phase. Procedure To configure a network connection manually, select the interface from the Network and Host name window and click Configure . An editing dialog specific to the selected interface opens. 
The options present depend on the connection type - the available options are slightly different depending on whether the connection type is a physical interface (wired or wireless network interface controller) or a virtual interface (Bond, Bridge, Team, or Vlan) that was previously configured in Adding a virtual interface . 10.6.3. Enabling or Disabling the Interface Connection You can enable or disable specific interface connections. Procedure Click the General tab. Select the Connect automatically with priority check box to enable connection by default. Keep the default priority setting at 0 . Optional: Enable or disable all users on the system from connecting to this network by using the All users may connect to this network option. If you disable this option, only root will be able to connect to this network. Important When enabled on a wired connection, the system automatically connects during startup or reboot. On a wireless connection, the interface attempts to connect to any known wireless networks in range. For further information about NetworkManager, including the nm-connection-editor tool, see the Configuring and managing networking document. Click Save to apply the changes and return to the Network and Host name window. It is not possible to only allow a specific user other than root to use this interface, as no other users are created at this point during the installation. If you need a connection for a different user, you must configure it after the installation. 10.6.4. Setting up Static IPv4 or IPv6 Settings By default, both IPv4 and IPv6 are set to automatic configuration depending on current network settings. This means that addresses such as the local IP address, DNS address, and other settings are detected automatically when the interface connects to a network. In many cases, this is sufficient, but you can also provide static configuration in the IPv4 Settings and IPv6 Settings tabs. Complete the following steps to configure IPv4 or IPv6 settings: Procedure To set static network configuration, navigate to one of the IPv Settings tabs and from the Method drop-down menu, select a method other than Automatic , for example, Manual . The Addresses pane is enabled. Optional: In the IPv6 Settings tab, you can also set the method to Ignore to disable IPv6 on this interface. Click Add and enter your address settings. Type the IP addresses in the Additional DNS servers field; it accepts one or more IP addresses of DNS servers, for example, 10.0.0.1,10.0.0.8 . Select the Require IPv X addressing for this connection to complete check box. Selecting this option in the IPv4 Settings or IPv6 Settings tabs allow this connection only if IPv4 or IPv6 was successful. If this option remains disabled for both IPv4 and IPv6, the interface is able to connect if configuration succeeds on either IP protocol. Click Save to apply the changes and return to the Network & Host name window. 10.6.5. Configuring Routes You can control the access of specific connections by configuring routes. Procedure In the IPv4 Settings and IPv6 Settings tabs, click Routes to configure routing settings for a specific IP protocol on an interface. An editing routes dialog specific to the interface opens. Click Add to add a route. Select the Ignore automatically obtained routes check box to configure at least one static route and to disable all routes not specifically configured. Select the Use this connection only for resources on its network check box to prevent the connection from becoming the default route. 
This option can be selected even if you did not configure any static routes. This route is used only to access certain resources, such as intranet pages that require a local or VPN connection. Another (default) route is used for publicly available resources. Unlike the additional routes configured, this setting is transferred to the installed system. This option is useful only when you configure more than one interface. Click OK to save your settings and return to the editing routes dialog that is specific to the interface. Click Save to apply the settings and return to the Network and Host Name window. 10.7. Optional: Configuring the keyboard layout You can configure the keyboard layout from the Installation Summary screen. Important If you use a layout that cannot accept Latin characters, such as Russian , add the English (United States) layout and configure a keyboard combination to switch between the two layouts. If you select a layout that does not have Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This might prevent you from completing the installation. Procedure From the Installation Summary window, click Keyboard . Click + to open the Add a Keyboard Layout window to change to a different layout. Select a layout by browsing the list or use the Search field. Select the required layout and click Add . The new layout appears under the default layout. Click Options to optionally configure a keyboard switch that you can use to cycle between available layouts. The Layout Switching Options window opens. To configure key combinations for switching, select one or more key combinations and click OK to confirm your selection. Optional: When you select a layout, click the Keyboard button to open a new dialog box displaying a visual representation of the selected layout. Click Done to apply the settings and return to graphical installations. 10.8. Optional: Configuring the language support You can change the language settings from the Installation Summary screen. Procedure From the Installation Summary window, click Language Support . The Language Support window opens. The left pane lists the available language groups. If at least one language from a group is configured, a check mark is displayed and the supported language is highlighted. From the left pane, click a group to select additional languages, and from the right pane, select regional options. Repeat this process for all the languages that you want to configure. Optional: Search the language group by typing in the text box, if required. Click Done to apply the settings and return to graphical installations. 10.9. Optional: Configuring the date and time-related settings You can configure the date and time-related settings from the Installation Summary screen. Procedure From the Installation Summary window, click Time & Date . The Time & Date window opens. The list of cities and regions come from the Time Zone Database ( tzdata ) public domain that is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat can not add cities or regions to this database. You can find more information at the IANA official website . From the Region drop-down menu, select a region. Select Etc as your region to configure a time zone relative to Greenwich Mean Time (GMT) without setting your location to a specific region. From the City drop-down menu, select the city, or the city closest to your location in the same time zone. 
Toggle the Network Time switch to enable or disable network time synchronization using the Network Time Protocol (NTP). Enabling the Network Time switch keeps your system time correct as long as the system can access the internet. By default, one NTP pool is configured. Optional: Use the gear wheel button next to the Network Time switch to add a new NTP server, or to disable or remove the default options. Click Done to apply the settings and return to graphical installations. Optional: Disable the network time synchronization to activate controls at the bottom of the page to set time and date manually. 10.10. Optional: Subscribing the system and activating Red Hat Insights Red Hat Insights is a Software-as-a-Service (SaaS) offering that provides continuous, in-depth analysis of registered Red Hat-based systems to proactively identify threats to security, performance and stability across physical, virtual and cloud environments, and container deployments. By registering your RHEL system in Red Hat Insights, you gain access to predictive analytics, security alerts, and performance optimization tools, enabling you to maintain a secure, efficient, and stable IT environment. You can register with Red Hat by using either your Red Hat account or your activation key details. You can connect your system to Red Hat Insights by using the Connect to Red Hat option. Procedure From the Installation Summary screen, under Software , click Connect to Red Hat . Select Account or Activation Key . If you select Account , enter your Red Hat Customer Portal username and password details. If you select Activation Key , enter your organization ID and activation key. You can enter more than one activation key, separated by a comma, as long as the activation keys are registered to your subscription. Select the Set System Purpose check box. If the account has Simple content access mode enabled, setting the system purpose values is still important for accurate reporting of consumption in the subscription services. If your account is in the entitlement mode, system purpose enables the entitlement server to determine and automatically attach the most appropriate subscription to satisfy the intended use of the Red Hat Enterprise Linux 8 system. Select the required Role , SLA , and Usage from the corresponding drop-down lists. The Connect to Red Hat Insights check box is enabled by default. Clear the check box if you do not want to connect to Red Hat Insights. Optional: Expand Options . Select the Use HTTP proxy check box if your network environment only allows external Internet access or access to content servers through an HTTP proxy. Clear the Use HTTP proxy check box if an HTTP proxy is not used. If you are running Satellite Server or performing internal testing, select the Custom Server URL and Custom base URL check boxes and enter the required details. Important The Custom Server URL field does not require the HTTP protocol, for example nameofhost.com . However, the Custom base URL field requires the HTTP protocol. To change the Custom base URL after registration, you must unregister, provide the new details, and then re-register. Click Register to register the system. When the system is successfully registered and subscriptions are attached, the Connect to Red Hat window displays the attached subscription details. Depending on the number of subscriptions, the registration and attachment process might take up to a minute to complete. Click Done to return to the Installation Summary window. 
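If you prefer, the same registration can also be performed from the command line after the installation finishes. The following is a minimal sketch that assumes you register with an organization ID and an activation key; replace the placeholder values with your own:
# subscription-manager register --org=<organization_id> --activationkey=<activation_key>
# insights-client --register    # optional: connect the system to Red Hat Insights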
A Registered message is displayed under Connect to Red Hat . Additional resources About Red Hat Insights 10.11. Optional: Using network-based repositories for the installation You can configure an installation source from either auto-detected installation media, Red Hat CDN, or the network. When the Installation Summary window first opens, the installation program attempts to configure an installation source based on the type of media that was used to boot the system. The full Red Hat Enterprise Linux Server DVD configures the source as local media. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have created bootable installation media. The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Source . The Installation Source window opens. Review the Auto-detected installation media section to verify the details. This option is selected by default if you started the installation program from media containing an installation source, for example, a DVD. Click Verify to check the media integrity. Review the Additional repositories section and note that the AppStream check box is selected by default. The BaseOS and AppStream repositories are installed as part of the full installation image. Do not disable the AppStream repository check box if you want a full Red Hat Enterprise Linux 8 installation. Optional: Select the Red Hat CDN option to register your system, attach RHEL subscriptions, and install RHEL from the Red Hat Content Delivery Network (CDN). Optional: Select the On the network option to download and install packages from a network location instead of local media. This option is available only when a network connection is active. See Configuring network and host name options for information about how to configure network connections in the GUI. Note If you do not want to download and install additional repositories from a network location, proceed to Configuring software selection . Select the On the network drop-down menu to specify the protocol for downloading packages. This setting depends on the server that you want to use. Type the server address (without the protocol) into the address field. If you choose NFS, a second input field opens where you can specify custom NFS mount options . This field accepts options listed in the nfs(5) man page on your system. When selecting an NFS installation source, specify the address with a colon ( : ) character separating the host name from the path. For example, server.example.com:/path/to/directory . The following steps are optional and are only required if you use a proxy for network access. Click Proxy setup to configure a proxy for an HTTP or HTTPS source. Select the Enable HTTP proxy check box and type the URL into the Proxy Host field. Select the Use Authentication check box if the proxy server requires authentication. Type in your user name and password. Click OK to finish the configuration and exit the Proxy Setup... dialog box. Note If your HTTP or HTTPS URL refers to a repository mirror, select the required option from the URL type drop-down list. All environments and additional software packages are available for selection when you finish configuring the sources. Click + to add a repository. Click - to delete a repository. Click the arrow icon to revert the current entries to the setting when you opened the Installation Source window. 
To activate or deactivate a repository, click the check box in the Enabled column for each entry in the list. You can name and configure your additional repository in the same way as the primary repository on the network. Click Done to apply the settings and return to the Installation Summary window. 10.12. Optional: Configuring Kdump kernel crash-dumping mechanism Kdump is a kernel crash-dumping mechanism. In the event of a system crash, Kdump captures the contents of the system memory at the moment of failure. This captured memory can be analyzed to find the cause of the crash. If Kdump is enabled, it must have a small portion of the system's memory (RAM) reserved to itself. This reserved memory is not accessible to the main kernel. Procedure From the Installation Summary window, click Kdump . The Kdump window opens. Select the Enable kdump check box. Select either the Automatic or Manual memory reservation setting. If you select Manual , enter the amount of memory (in megabytes) that you want to reserve in the Memory to be reserved field using the + and - buttons. The Usable System Memory readout below the reservation input field shows how much memory is accessible to your main system after reserving the amount of RAM that you select. Click Done to apply the settings and return to graphical installations. The amount of memory that you reserve is determined by your system architecture (AMD64 and Intel 64 have different requirements than IBM Power) as well as the total amount of system memory. In most cases, automatic reservation is satisfactory. Additional settings, such as the location where kernel crash dumps will be saved, can only be configured after the installation using either the system-config-kdump graphical interface, or manually in the /etc/kdump.conf configuration file. 10.13. Optional: Selecting a security profile You can apply security policy during your Red Hat Enterprise Linux 8 installation and configure it to use on your system before the first boot. 10.13.1. About security policy The Red Hat Enterprise Linux includes OpenSCAP suite to enable automated configuration of the system in alignment with a particular security policy. The policy is implemented using the Security Content Automation Protocol (SCAP) standard. The packages are available in the AppStream repository. However, by default, the installation and post-installation process does not enforce any policies and therefore does not involve any checks unless specifically configured. Applying a security policy is not a mandatory feature of the installation program. If you apply a security policy to the system, it is installed using restrictions defined in the profile that you selected. The openscap-scanner and scap-security-guide packages are added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. When you select a security policy, the Anaconda GUI installer requires the configuration to adhere to the policy's requirements. There might be conflicting package selections, as well as separate partitions defined. Only after all the requirements are met, you can start the installation. At the end of the installation process, the selected OpenSCAP security policy automatically hardens the system and scans it to verify compliance, saving the scan results to the /root/openscap_data directory on the installed system. By default, the installer uses the content of the scap-security-guide package bundled in the installation image. 
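On the installed system you can re-run a compliance scan against the same scap-security-guide content at any time. The sketch below assumes the default RHEL 8 data stream path and the OSPP profile ID; verify both with oscap info before relying on them, and choose your own report location:
# oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml    # list the available profiles
# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_ospp \
    --report /root/postinstall-report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml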
You can also load external content from an HTTP, HTTPS, or FTP server. 10.13.2. Configuring a security profile You can configure a security policy from the Installation Summary window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Security Profile . The Security Profile window opens. To enable security policies on the system, toggle the Apply security policy switch to ON . Select one of the profiles listed in the top pane. Click Select profile . Profile changes that you must apply before installation appear in the bottom pane. Click Change content to use a custom profile. A separate window opens allowing you to enter a URL for valid security content. Click Fetch to retrieve the URL. You can load custom profiles from an HTTP , HTTPS , or FTP server. Use the full address of the content including the protocol, such as http:// . A network connection must be active before you can load a custom profile. The installation program detects the content type automatically. Click Use SCAP Security Guide to return to the Security Profile window. Click Done to apply the settings and return to the Installation Summary window. 10.13.3. Profiles not compatible with Server with GUI Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles: Table 10.2. Profiles not compatible with Server with GUI Profile name Profile ID Justification Notes CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui The nfs-utils package is part of the Server with GUI package set, but the policy requires its removal. Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp The nfs-utils package is part of the Server with GUI package set, but the policy requires its removal. DISA STIG for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. To install a RHEL system as a Server with GUI aligned with DISA STIG in RHEL version 8.4 and later, you can use the DISA STIG with GUI profile. 10.13.4. Deploying baseline-compliant RHEL systems using Kickstart You can deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Prerequisites The scap-security-guide package is installed on your RHEL 8 system. Procedure Open the /usr/share/scap-security-guide/kickstart/ssg-rhel8-ospp-ks.cfg Kickstart file in an editor of your choice. 
Update the partitioning scheme to fit your configuration requirements. For OSPP compliance, the separate partitions for /boot , /home , /var , /tmp , /var/log , /var/tmp , and /var/log/audit must be preserved, and you can only change the size of the partitions. Start a Kickstart installation as described in Performing an automated installation using Kickstart . Important Passwords in Kickstart files are not checked for OSPP requirements. Verification To check the current status of the system after installation is complete, reboot the system and start a new scan: Additional resources OSCAP Anaconda Add-on Kickstart commands and options reference: %addon org_fedora_oscap 10.13.5. Additional resources scap-security-guide(8) - The manual page for the scap-security-guide project contains information about SCAP security profiles, including examples on how to utilize the provided benchmarks using the OpenSCAP utility. Red Hat Enterprise Linux security compliance information is available in the Security hardening document.
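As an optional pre-check before editing the Kickstart file, you can list the profiles shipped in the bundled data stream and confirm that the OSPP profile ID is present. This is a minimal sketch that assumes the default scap-security-guide data stream path on RHEL 8; adjust the path if you use different content:
# oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
The output includes the available profile IDs, such as xccdf_org.ssgproject.content_profile_ospp, which you can compare against the profile referenced in the Kickstart file.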
[ "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/customizing-the-system-in-the-installer_rhel-installer
function::ansi_new_line
function::ansi_new_line Name function::ansi_new_line - Move the cursor to a new line. Synopsis Arguments None Description Sends the ANSI escape code that moves the cursor to a new line.
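A minimal usage sketch, assuming the stap command and the ansi tapset are installed, calls the function from a one-line script run in the shell:
stap -e 'probe begin { print("first line"); ansi_new_line(); print("second line"); exit() }'
The two strings are printed with the cursor moved to a new line between them by ansi_new_line.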
[ "ansi_new_line()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ansi-new-line
19.5. Bug Fixes
19.5. Bug Fixes The ld linker generates correct dynamic executables Previously, the ld linker failed to create correct dynamic executables and terminated when invoked by the Go language compiler go on the 64-bit ARM architecture. The linker has been updated to correctly handle copy relocations. As a result, the linker no longer fails in the described situation. (BZ#1430743) The ld linker generates correct dynamic relocations for constant data Previously, the ld linker generated an incorrect kind of dynamic relocation for constant data shared between a library and an executable on the 64-bit ARM architecture. As a consequence, the produced executable files wasted resources and terminated unexpectedly when the shared data was accessed. The linker has been updated to generate correct dynamic relocations, and the described problem no longer occurs. (BZ#1452170) qrwlock is now enabled for 64-bit ARM systems This update introduces the qrwlock queued read-write lock for 64-bit ARM systems. This mechanism improves performance and prevents lock starvation by ensuring fair handling of multiple CPUs competing for the global task lock. The change also resolves a known issue, tracked in Red Hat Bugzilla #1454844, that was present in earlier releases and caused soft lockups under heavy load. Note that any kernel modules built for previous versions of Red Hat Enterprise Linux 7 for ARM (against the kernel-alt packages) must be rebuilt against the updated kernel. CMA disabled by default On 64-bit ARM Red Hat Enterprise Linux systems with memory limited to 1 GB or less, the Contiguous Memory Allocator (CMA) consumed a large amount of memory, leaving insufficient memory for the rest of the kernel. Consequently, an out-of-memory (OOM) condition sometimes occurred in the kernel or in certain user-space applications, such as Shared Memory in Linux (SHM) (/dev/shm). The CMA support in the Red Hat Enterprise Linux kernel is now disabled by default for all architectures, and CMA no longer causes OOM. (BZ#1519317)
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/arm-bug-fixes
Appendix C. Migrating Databases and Services to a Remote Server
Appendix C. Migrating Databases and Services to a Remote Server Although you cannot configure remote databases and services during the automated installation, you can migrate them to a separate server post-installation. C.1. Migrating the Data Warehouse to a Separate Machine This section describes how to migrate the Data Warehouse database and service from the Red Hat Virtualization Manager machine to a separate machine. Hosting the Data Warehouse service on a separate machine reduces the load on each individual machine, and avoids potential conflicts caused by sharing CPU and memory resources with other processes. Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. You have the following migration options: You can migrate the Data Warehouse service away from the Manager machine and connect it with the existing Data Warehouse database ( ovirt_engine_history ). You can migrate the Data Warehouse database away from the Manager machine and then migrate the Data Warehouse service. C.1.1. Migrating the Data Warehouse Database to a Separate Machine Migrate the Data Warehouse database ( ovirt_engine_history ) before you migrate the Data Warehouse service. Use engine-backup to create a database backup and restore it on the new database machine. For more information on engine-backup , run engine-backup --help . Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. The new database server must have Red Hat Enterprise Linux 8 installed. Enable the required repositories on the new database server. C.1.1.1. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Data Warehouse machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable version 12 of the postgresql module. 
# dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Update the Self-Hosted Engine using the procedure Updating a Self-Hosted Engine in the Upgrade Guide . Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream C.1.1.2. Migrating the Data Warehouse Database to a Separate Machine Procedure Create a backup of the Data Warehouse database and configuration files on the Manager: # engine-backup --mode=backup --scope=grafanadb --scope=dwhdb --scope=files --file= file_name --log= log_file_name Copy the backup file from the Manager to the new machine: # scp /tmp/file_name [email protected]:/tmp Install engine-backup on the new machine: # dnf install ovirt-engine-tools-backup Install the PostgreSQL server package: # dnf install postgresql-server postgresql-contrib Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot: Restore the Data Warehouse database on the new machine. file_name is the backup file copied from the Manager. # engine-backup --mode=restore --scope=files --scope=grafanadb --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db When the --provision-* option is used in restore mode, --restore-permissions is applied by default. The Data Warehouse database is now hosted on a separate machine from that on which the Manager is hosted. After successfully restoring the Data Warehouse database, a prompt instructs you to run the engine-setup command. Before running this command, migrate the Data Warehouse service. C.1.2. Migrating the Data Warehouse Service to a Separate Machine You can migrate the Data Warehouse service installed and configured on the Red Hat Virtualization Manager to a separate machine. Hosting the Data Warehouse service on a separate machine helps to reduce the load on the Manager machine. Notice that this procedure migrates the Data Warehouse service only. To migrate the Data Warehouse database ( ovirt_engine_history ) prior to migrating the Data Warehouse service, see Migrating the Data Warehouse Database to a Separate Machine . Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. Prerequisites You must have installed and configured the Manager and Data Warehouse on the same machine. To set up the new Data Warehouse machine, you must have the following: The password from the Manager's /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file. Allowed access from the Data Warehouse machine to the Manager database machine's TCP port 5432. The username and password for the Data Warehouse database from the Manager's /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file. If you migrated the ovirt_engine_history database using the procedures described in Migrating the Data Warehouse Database to a Separate Machine , the backup includes these credentials, which you defined during the database setup on that machine. 
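As an optional sanity check before configuring the service, you can confirm from the new Data Warehouse machine that the Manager database machine accepts connections on TCP port 5432. This is a minimal sketch that assumes the postgresql client tools are installed and uses manager-db-fqdn as a placeholder for your Manager database host:
# pg_isready -h manager-db-fqdn -p 5432
If the command reports that the server is accepting connections, the PostgreSQL listener is reachable from the new machine; otherwise, review the firewall and the listen_addresses and pg_hba.conf settings on the database host.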
Installing this scenario requires four steps: Setting up the New Data Warehouse Machine Stopping the Data Warehouse service on the Manager machine Configuring the new Data Warehouse machine Disabling the Data Warehouse package on the Manager machine C.1.2.1. Setting up the New Data Warehouse Machine Enable the Red Hat Virtualization repositories and install the Data Warehouse setup package on a Red Hat Enterprise Linux 8 machine: Enable the required repositories: Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Ensure that all packages currently installed are up to date: # dnf upgrade --nobest Install the ovirt-engine-dwh-setup package: # dnf install ovirt-engine-dwh-setup C.1.2.2. Stopping the Data Warehouse Service on the Manager Machine Procedure Stop the Data Warehouse service: # systemctl stop ovirt-engine-dwhd.service If the database is hosted on a remote machine, you must manually grant access by editing the postgres.conf file. Edit the /var/lib/pgsql/data/postgresql.conf file and modify the listen_addresses line so that it matches the following: listen_addresses = '*' If the line does not exist or has been commented out, add it manually. If the database is hosted on the Manager machine and was configured during a clean setup of the Red Hat Virtualization Manager, access is granted by default. Restart the postgresql service: # systemctl restart postgresql C.1.2.3. Configuring the New Data Warehouse Machine The order of the options or settings shown in this section may differ depending on your environment. If you are migrating both the ovirt_engine_history database and the Data Warehouse service to the same machine, run the following, otherwise proceed to the step. # sed -i '/^ENGINE_DB_/d' \ /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf # sed -i \ -e 's;^\(OVESETUP_ENGINE_CORE/enable=bool\):True;\1:False;' \ -e '/^OVESETUP_CONFIG\/fqdn/d' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf Remove the apache/grafana PKI files, so that they are regenerated by engine-setup with correct values: Run the engine-setup command to begin configuration of Data Warehouse on the machine: # engine-setup Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter : Host fully qualified DNS name of this server [ autodetected host name ]: Press Enter to automatically configure the firewall, or type No and press Enter to maintain existing settings: Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? 
(Yes, No) [Yes]: If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Enter the fully qualified domain name and password for the Manager. Press Enter to accept the default values in each other field: Host fully qualified DNS name of the engine server []: engine-fqdn Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: ssh port on remote engine server [22]: root password on remote engine server engine-fqdn : password Enter the FQDN and password for the Manager database machine. Press Enter to accept the default values in each other field: Engine database host []: manager-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password Confirm your installation settings: Please confirm installation settings (OK, Cancel) [OK]: The Data Warehouse service is now configured on the remote machine. Proceed to disable the Data Warehouse service on the Manager machine. C.1.2.4. Disabling the Data Warehouse Service on the Manager Machine Prerequisites The Grafana service on the Manager machine is disabled: # systemctl disable --now grafana-server.service Procedure On the Manager machine, restart the Manager: # service ovirt-engine restart Run the following command to modify the file /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf and set the options to False : # sed -i \ -e 's;^\(OVESETUP_DWH_CORE/enable=bool\):True;\1:False;' \ -e 's;^\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\):True;\1:False;' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf # sed -i \ -e 's;^\(OVESETUP_GRAFANA_CORE/enable=bool\):True;\1:False;' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf Disable the Data Warehouse service: # systemctl disable ovirt-engine-dwhd.service Remove the Data Warehouse files: # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/* The Data Warehouse service is now hosted on a separate machine from the Manager.
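To verify the migration, you can check the service state on both machines. A minimal check, using only the service name:
# systemctl status ovirt-engine-dwhd.service
The service should be active on the new Data Warehouse machine, and inactive and disabled on the Manager machine.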
[ "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "dnf repolist", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms", "subscription-manager release --set=8.6", "dnf module -y enable postgresql:12", "dnf module -y enable nodejs:14", "engine-backup --mode=backup --scope=grafanadb --scope=dwhdb --scope=files --file= file_name --log= log_file_name", "scp /tmp/file_name [email protected]:/tmp", "dnf install ovirt-engine-tools-backup", "dnf install postgresql-server postgresql-contrib", "su - postgres -c 'initdb' systemctl enable postgresql systemctl start postgresql", "engine-backup --mode=restore --scope=files --scope=grafanadb --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms subscription-manager release --set=8.6", "dnf module -y enable pki-deps", "dnf upgrade --nobest", "dnf install ovirt-engine-dwh-setup", "systemctl stop ovirt-engine-dwhd.service", "listen_addresses = '*'", "systemctl restart postgresql", "sed -i '/^ENGINE_DB_/d' /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf sed -i -e 's;^\\(OVESETUP_ENGINE_CORE/enable=bool\\):True;\\1:False;' -e '/^OVESETUP_CONFIG\\/fqdn/d' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf", "rm -f /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache-grafana.cer /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache-grafana.key.nopass /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ovirt-engine/apache-grafana-ca.pem", "engine-setup", "Host fully qualified DNS name of this server [ autodetected host name ]:", "Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:", "Host fully qualified DNS name of the engine server []: engine-fqdn Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. 
Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: ssh port on remote engine server [22]: root password on remote engine server engine-fqdn : password", "Engine database host []: manager-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password", "Please confirm installation settings (OK, Cancel) [OK]:", "systemctl disable --now grafana-server.service", "service ovirt-engine restart", "sed -i -e 's;^\\(OVESETUP_DWH_CORE/enable=bool\\):True;\\1:False;' -e 's;^\\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\\):True;\\1:False;' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf sed -i -e 's;^\\(OVESETUP_GRAFANA_CORE/enable=bool\\):True;\\1:False;' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf", "systemctl disable ovirt-engine-dwhd.service", "rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/*" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Migrating_to_remote_servers_SHE_cli_deploy
Chapter 2. Maintenance support
Chapter 2. Maintenance support 2.1. Maintenance support for JBoss EAP XP When a new JBoss EAP XP major version is released, maintenance support for the previous major version begins. Maintenance support usually lasts for 12 weeks. If you use a JBoss EAP XP major version that is beyond its maintenance support period, you might experience issues because security patches and bug fixes are no longer developed for it. To avoid such issues, upgrade to the newest JBoss EAP XP major version release that is compatible with your JBoss EAP version. Additional resources For information about maintenance support, see the Red Hat JBoss Enterprise Application Platform expansion pack (JBoss EAP XP or EAP XP) Life Cycle and Support Policies located on the Red Hat Customer Portal.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_4.0.0_release_notes/maintenance_support
Chapter 4. Migrating to KRaft mode
Chapter 4. Migrating to KRaft mode If you are using ZooKeeper for metadata management of your Kafka cluster, you can migrate to using Kafka in KRaft mode. KRaft mode replaces ZooKeeper for distributed coordination, offering enhanced reliability, scalability, and throughput. To migrate your cluster, do as follows: Install a quorum of controller nodes to replace ZooKeeper for cluster management. Enable KRaft migration in the controller configuration by setting the zookeeper.metadata.migration.enable property to true . Start the controllers and enable KRaft migration on the current cluster brokers using the same configuration property. Perform a rolling restart of the brokers to apply the configuration changes. When migration is complete, switch the brokers to KRaft mode and disable migration on the controllers. Important Once KRaft mode has been finalized, rollback to ZooKeeper is not possible. Carefully consider this before proceeding with the migration. Before starting the migration, verify that your environment can support Kafka in KRaft mode: Migration is only supported on dedicated controller nodes, not on nodes with dual roles as brokers and controllers. Throughout the migration process, ZooKeeper and KRaft controller nodes operate in parallel, requiring sufficient compute resources in your cluster. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. Streams for Apache Kafka is installed on each host , and the configuration files are available. You are using Streams for Apache Kafka 2.7 or newer with Kafka 3.7.0 or newer. If you are using an earlier version of Streams for Apache Kafka, upgrade before migrating to KRaft mode. Logging is enabled to check the migration process. Set DEBUG level in log4j.properties for the root logger on the controllers and brokers in the cluster. For detailed migration-specific logs, set TRACE for the migration logger: Controller logging configuration log4j.rootLogger=DEBUG log4j.logger.org.apache.kafka.metadata.migration=TRACE Procedure Retrieve the cluster ID of your Kafka cluster. Use the zookeeper-shell tool: /opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /cluster/id The command returns the cluster ID. Install a KRaft controller quorum to the cluster. Configure a controller node on each host using the controller.properties file. At a minimum, each controller requires the following configuration: A unique node ID The migration enabled flag set to true ZooKeeper connection details Listener name used by the controller quorum A quorum of controller voters Listener name for inter-broker communication Example controller configuration process.roles=controller node.id=1 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090 inter.broker.listener.name=PLAINTEXT The format for the controller quorum is <node_id>@<hostname>:<port> in a comma-separated list. The inter-broker listener name is required for the KRaft controller to initiate the migration. Set up log directories for each controller node: /opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/controller.properties Returns: Formatting /tmp/kraft-controller-logs Replace <uuid> with the cluster ID you retrieved. Use the same cluster ID for each controller node in your cluster. 
By default, the log directory ( log.dirs ) specified in the controller.properties configuration file is set to /tmp/kraft-controller-logs . The /tmp directory is typically cleared on each system reboot, making it suitable for development environments only. Set multiple log directories using a comma-separated list, if needed. Start each controller. /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties Check that Kafka is running: jcmd | grep kafka Returns: process ID kafka.Kafka /opt/kafka/config/kraft/controller.properties Check the logs of each controller to ensure that they have successfully joined the KRaft cluster: tail -f /opt/kafka/logs/controller.log Enable migration on each broker. If running, stop the Kafka broker running on the host. /opt/kafka/bin/kafka-server-stop.sh jcmd | grep kafka If using a multi-node cluster, refer to Section 3.6, "Performing a graceful rolling restart of Kafka brokers" . Enable migration using the server.properties file. At a minimum, each broker requires the following additional configuration: Inter-broker protocol version set to version 3.7 The migration enabled flag Controller configuration that matches the controller nodes A quorum of controller voters Example broker configuration broker.id=0 inter.broker.protocol.version=3.7 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090 The ZooKeeper connection details should already be present. Restart the updated broker: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties The migration starts automatically and can take some time depending on the number of topics and partitions in the cluster. Check that Kafka is running: jcmd | grep kafka Returns: process ID kafka.Kafka /opt/kafka/config/kraft/server.properties Check the log on the active controller to confirm that the migration is complete: /opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /controller Look for an INFO log entry that says the following: Completed migration of metadata from ZooKeeper to KRaft. Switch each broker to KRaft mode. Stop the broker, as before. Update the broker configuration in the server.properties file: Replace the broker.id with a node.id using the same ID Add a broker KRaft role for the broker Remove the inter-broker protocol version ( inter.broker.protocol.version ) Remove the migration enabled flag ( zookeeper.metadata.migration.enable ) Remove ZooKeeper configuration Remove the listener for controller and broker communication ( control.plane.listener.name ) Example broker configuration for KRaft node.id=0 process.roles=broker listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090 If you are using ACLS in your broker configuration, update the authorizer using the authorizer.class.name property to the KRaft-based standard authorizer. ZooKeeper-based brokers use authorizer.class.name=kafka.security.authorizer.AclAuthorizer . When migrating to KRaft-based brokers, specify authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer . Restart the broker, as before. Switch each controller out of migration mode. Stop the controller in the same way as the broker, as described previously. 
Update the controller configuration in the controller.properties file: Remove the ZooKeeper connection details Remove the zookeeper.metadata.migration.enable property Remove inter.broker.listener.name Example controller configuration following migration process.roles=controller node.id=1 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090 Restart the controller in the same way as the broker, as described previously.
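To confirm that the cluster is operating with a KRaft controller quorum after the migration, you can query the metadata quorum state from any broker. This is a minimal sketch that assumes a broker listener is reachable on localhost:9092; replace the address with one of your broker endpoints:
/opt/kafka/bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
The output shows the current quorum leader and the set of controller voters.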
[ "log4j.rootLogger=DEBUG log4j.logger.org.apache.kafka.metadata.migration=TRACE", "/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /cluster/id", "process.roles=controller node.id=1 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090 inter.broker.listener.name=PLAINTEXT", "/opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/controller.properties", "Formatting /tmp/kraft-controller-logs", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties", "jcmd | grep kafka", "process ID kafka.Kafka /opt/kafka/config/kraft/controller.properties", "tail -f /opt/kafka/logs/controller.log", "/opt/kafka/bin/kafka-server-stop.sh jcmd | grep kafka", "broker.id=0 inter.broker.protocol.version=3.7 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties", "jcmd | grep kafka", "process ID kafka.Kafka /opt/kafka/config/kraft/server.properties", "/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /controller", "node.id=0 process.roles=broker listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090", "process.roles=controller node.id=1 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.voters=1@localhost:9090" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/proc-migrating-kafka-to-kraft-str
19.9. Help and Information Options
19.9. Help and Information Options This section lists the help and information options. Help: -h , -help Version: -version Audio Help: -audio-help
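As a usage sketch, assuming the qemu-kvm binary is installed at its usual RHEL 6 location of /usr/libexec/qemu-kvm, these options are passed directly on the command line and cause the binary to print its output and exit without starting a guest:
/usr/libexec/qemu-kvm -version
/usr/libexec/qemu-kvm -h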
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sec-qemu_kvm_whitelist_help
Chapter 8. ResourceQuota [v1]
Chapter 8. ResourceQuota [v1] Description ResourceQuota sets aggregate quota restrictions enforced per namespace Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ResourceQuotaSpec defines the desired hard limits to enforce for Quota. status object ResourceQuotaStatus defines the enforced hard limits and observed use. 8.1.1. .spec Description ResourceQuotaSpec defines the desired hard limits to enforce for Quota. Type object Property Type Description hard object (Quantity) hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object A scope selector represents the AND of the selectors represented by the scoped-resource selector requirements. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 8.1.2. .spec.scopeSelector Description A scope selector represents the AND of the selectors represented by the scoped-resource selector requirements. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 8.1.3. .spec.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 8.1.4. .spec.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. Type object Required scopeName operator Property Type Description operator string Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Possible enum values: - "DoesNotExist" - "Exists" - "In" - "NotIn" scopeName string The name of the scope that the selector applies to. Possible enum values: - "BestEffort" Match all pod objects that have best effort quality of service - "CrossNamespacePodAffinity" Match all pod objects that have cross-namespace pod (anti)affinity mentioned. - "NotBestEffort" Match all pod objects that do not have best effort quality of service - "NotTerminating" Match all pod objects where spec.activeDeadlineSeconds is nil - "PriorityClass" Match all pod objects that have priority class mentioned - "Terminating" Match all pod objects where spec.activeDeadlineSeconds >=0 values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
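As an illustration of the spec fields described above, the following is a minimal sketch of a ResourceQuota manifest created with the oc client; the namespace, quota name, limit values, and priority class value are placeholders:
oc apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: example-namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values:
      - high
EOF
In this sketch, the scopeSelector restricts the quota to pods that reference the high priority class, using the PriorityClass scope name and In operator described in the tables above.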
8.1.5. .status Description ResourceQuotaStatus defines the enforced hard limits and observed use. Type object Property Type Description hard object (Quantity) Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used object (Quantity) Used is the current observed total usage of the resource in the namespace. 8.2. API endpoints The following API endpoints are available: /api/v1/resourcequotas GET : list or watch objects of kind ResourceQuota /api/v1/watch/resourcequotas GET : watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/resourcequotas DELETE : delete collection of ResourceQuota GET : list or watch objects of kind ResourceQuota POST : create a ResourceQuota /api/v1/watch/namespaces/{namespace}/resourcequotas GET : watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/resourcequotas/{name} DELETE : delete a ResourceQuota GET : read the specified ResourceQuota PATCH : partially update the specified ResourceQuota PUT : replace the specified ResourceQuota /api/v1/watch/namespaces/{namespace}/resourcequotas/{name} GET : watch changes to an object of kind ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/resourcequotas/{name}/status GET : read status of the specified ResourceQuota PATCH : partially update status of the specified ResourceQuota PUT : replace status of the specified ResourceQuota 8.2.1. /api/v1/resourcequotas Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ResourceQuota Table 8.2. HTTP responses HTTP code Reponse body 200 - OK ResourceQuotaList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/resourcequotas Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/resourcequotas Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ResourceQuota Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ResourceQuota Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK ResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ResourceQuota Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body ResourceQuota schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 202 - Accepted ResourceQuota schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/resourcequotas Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/resourcequotas/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the ResourceQuota namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ResourceQuota Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 202 - Accepted ResourceQuota schema 401 - Unauthorized Empty HTTP method GET Description read the specified ResourceQuota Table 8.23. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ResourceQuota Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ResourceQuota Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body ResourceQuota schema Table 8.29. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty 8.2.6. /api/v1/watch/namespaces/{namespace}/resourcequotas/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the ResourceQuota namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /api/v1/namespaces/{namespace}/resourcequotas/{name}/status Table 8.33. Global path parameters Parameter Type Description name string name of the ResourceQuota namespace string object name and auth scope, such as for teams and projects Table 8.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ResourceQuota Table 8.35. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ResourceQuota Table 8.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.37. Body parameters Parameter Type Description body Patch schema Table 8.38. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ResourceQuota Table 8.39. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.40. Body parameters Parameter Type Description body ResourceQuota schema Table 8.41. HTTP responses HTTP code Reponse body 200 - OK ResourceQuota schema 201 - Created ResourceQuota schema 401 - Unauthorized Empty
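The limit and continue parameters described above can be exercised directly with raw API requests. The following is a minimal, illustrative sketch only: the default namespace and the page size of 2 are arbitrary assumptions, and the continue token placeholder must be replaced with the metadata.continue value returned by the previous response.
# oc get --raw '/api/v1/namespaces/default/resourcequotas?limit=2'
# oc get --raw '/api/v1/namespaces/default/resourcequotas?limit=2&continue=<token_from_previous_response>'
When a response comes back with an empty continue field in its list metadata, no further results are available.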
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/schedule_and_quota_apis/resourcequota-v1
Chapter 3. Managing SCAP security policies in the Insights for RHEL compliance service
Chapter 3. Managing SCAP security policies in the Insights for RHEL compliance service Create and manage your SCAP security policies entirely within the compliance service UI. Define new policies and select the rules and systems you want to associate with them, and edit existing policies as your requirements change. Important Unlike most other Red Hat Insights for Red Hat Enterprise Linux services, the compliance service does not run automatically on a default schedule. In order to upload OpenSCAP data to the Insights for Red Hat Enterprise Linux application, you must run insights-client --compliance , either on-demand or on a scheduled job that you set. Additional resources How do I set up recurring uploads for Insights services? 3.1. Creating new SCAP policies You must add each Insights for Red Hat Enterprise Linux-registered system to one or more security policies before you can perform a scan or see results for that scan in the compliance service UI. To create a new policy, and include specific systems and rules, complete the following steps: Important If your RHEL servers span across multiple major releases of RHEL, you must create a separate policy for each major release. For example, all of your RHEL 7 servers would be on one Standard System Security Profile for RHEL policy and all of your RHEL 8 servers will be on another. Procedure Navigate to the Security > Compliance > SCAP Policies page. Click the Create new policy button. On the Create SCAP policy page of the wizard, select the RHEL major version of the systems you will include in the policy. Select one of the policy types available for that RHEL major version, then click . On the Details page, accept the name and description already provided or provide your own more meaningful entries. Optionally, add a Business objective to give context, for example, "CISO mandate." Define a compliance threshold acceptable for your requirements and click . Select the Systems to include on this policy and click . Your selection of a RHEL major version in the first step automatically determines which systems can be added to this policy. Select which Rules to include with each policy. Because each minor version of RHEL supports the use of a specific SCAP Security Guide (SSG) version (sometimes more than one, in which case we use the latest), the rule set for each RHEL minor version is slightly different and must be selected separately. Optionally, use the filtering and search capabilities to refine the list of rules. For example, to show only the highest severity rules, click the primary filter dropdown and select Severity . In the secondary filter, check the boxes for High and Medium . The rules shown by default are those designated for that policy type and that version of SSG. By default, the Selected only toggle to the filter boxes is enabled. You may remove this toggle if so desired. Repeat this process as needed for each RHEL minor version tab . After you select rules for each Red Hat Enterprise Linux minor version SSG, click . On the Review page, verify that the information shown is correct, then click Finish . Give the app a minute to create the policy, then click the Return to application button to view your new policy. Note You have to go to the system and run the compliance scan before results will be shown in the compliance service UI. 3.2. Editing compliance policies After creating a compliance policy, you can later edit the policy to change the policy details, or which rules or systems are included. 
Use the following procedures to edit a policy to suit the needs of your organization. User Access Note Editing the included rules and systems in a policy requires that a user be a member of a User Access Group with the Compliance administrator role. The Compliance administrator role includes enhanced permissions that are not granted by default to all Insights for Red Hat Enterprise Linux users. 3.2.1. Editing policy details Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. Procedure Navigate to the Security > Compliance > SCAP policies page. Locate the policy you want to edit. Click on the policy name. This opens the policy details view. Wherever you see a pencil icon, you can click on the icon to edit the details in that field. Editable fields include Compliance threshold Business objective Policy description After you edit a field, click the blue checkmark to the right of the field to save your input. 3.2.2. Editing included rules Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have Compliance administrator User Access permissions. Procedure Navigate to the Security > Compliance > SCAP policies page. Locate the policy you want to edit. On the right side of the policy row, click the More actions icon, , and click Edit policy . In the Edit popup, click the Rules tab. Click on a RHEL minor version. Important Because a different SCAP Security Guide (SSG) version exists for each minor version of RHEL, you must edit the rules for each minor version of RHEL separately. Use the Name filter and search function to locate the rules to remove. Note With the Name primary filter selected, you can search by the rule name or its identifier. Uncheck the box next to any rule you want to remove. Or, check the box next to any rule you want to add. Repeat these steps for each RHEL minor version tab. Click Save . Verification Navigate to the Security > Compliance > SCAP policies page and locate the edited policy. Click on the policy and verify that the included rules are consistent with the edits you made. 3.2.3. Editing included systems Navigate to the Security > Compliance > SCAP policies page. Locate the policy you want to edit. On the right side of the policy row, click the More actions icon, , and click Edit policy . In the Edit popup, click the Systems tab. A list of all available systems is displayed. Systems that are already included in the policy have a checkmark in the box to the left side of the system name. Systems without a checkmark next to the system name are not included in this policy. Search for a system by name. To include that system in the policy, check the box next to the system name. Or, to remove the system from the policy, uncheck the box next to the system name. Click Save to save your changes. Verification Navigate to the Security > Compliance > SCAP policies page and locate the edited policy. Click on the policy and verify that the included systems are consistent with the edits you made. 3.3. Viewing SCAP policies using the insights-client command After you have registered your system to Insights, you can view all available compliance policies for that system using the insights-client --compliance-policies command. Prerequisites The Insights client is installed on the system. You are logged in to a system where you have root privileges. Procedure At the command line, enter: This command displays a list of compliance policies that are supported for the system. 
The output shows the ID and Title for the policies, and whether the policies are Assigned (shows a value of TRUE or FALSE that indicates whether the policy is assigned to the system or not). Additional Resources For more information about the insights-client --compliance options, see the Client Configuration Guide for Red Hat Insights . 3.4. Assign systems to SCAP policies using the insights-client command You can assign (add) systems to SCAP policies using the insights-client --compliance-assign command. This command option provides you the ability to create custom automation for working with your systems and the SCAP policies available to those systems. Prerequisites The Insights client is installed on the system. You are logged in to a system where you have root privileges. You have run the insights-client --compliance-policies command. Procedure At the command line, enter Note Use a policy ID from the insights-client --compliance-policies command output. Verification steps Navigate to Security > Compliance > SCAP policies . Click the name of the policy you assigned the system to. Click the Systems tab. The system is listed for the policy. You can also run the insights-client --compliance-policies command to see if the Assigned value is set to True for the policy. For more information about the insights-client --compliance options, see the Client Configuration Guide for Red Hat Insights . 3.5. Unassigning systems from SCAP policies using the insights-client command You can unassign (remove) systems from SCAP policies using insights-client --compliance-unassign command. Optionally, you can use the command to create your own custom automations for your systems and SCAP policies. Prerequisite The Insights client is installed on the system. You are logged in to a system where you have root privileges. You have run the insights-client --compliance-policies command. Procedure At the command line, enter Note Use a policy ID from the insights-client --compliance-policies command output. Verification steps Navigate to Security > Compliance > SCAP policies . Click the name of the policy you assigned the system to. Click the Systems tab. The system is no longer listed. To find out if the Assigned value is set to False for the policy, run the insights-client --compliance-policies command again. Additional Resources For more information about the insights-client --compliance options, see the Client Configuration Guide for Red Hat Insights . 3.6. Viewing policy rules Insights Compliance displays rules in categorized groups, so that similar rules are close together. You can see rules grouped according to category or classification for the compliance checks that will take place for a policy. The nested group structure (or tree view) is the default view. The tree view provides additional contextual information that allows you to see categories of rules, and at times, multiple rules for a policy. The tree view also allows you to see rules that have editable values (for more information about editable rule values, see "Editing values for policy rules"). You can view rules in the tree view or the classic view. In the classic view, rules appear in a linear list. You can switch from the tree view to the classic view by toggling between the two buttons under View policy rules . To see rules listed in tree view format, click the tree view icon ( ). To see rules listed in the classic view format, click the classic view icon ( ). 
Note When you use the filter feature to search for a specific rule, the view automatically switches to the classic view. After you expand a rule to show additional information, it will stay in the expanded view, even if you switch to a different view. You can switch views when you are: Editing compliance policies Creating new SCAP policies Generating Compliance Service Reports (see "Exporting reports" topics)
[ "insights-client --compliance-policies", "insights-client --compliance-assign <ID>.", "insights-client --compliance-unassign <ID>" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems/compliance-managing-policies_intro-compliance
Chapter 4. Metrics data retention
Chapter 4. Metrics data retention The storage capacity required by PCP data logging is determined by the following factors: The logged metrics The logging interval The retention policy The default logging (sampling) interval is 60 seconds. The default retention policy is to compress archives older than one day and to keep archives for the last 14 days. You can increase the logging interval or shorten the retention policy to save storage space. If you require high-resolution sampling, you can decrease the logging interval. In that case, ensure that you have enough storage space. PCP archive logs are stored in the /var/log/pcp/pmlogger/satellite.example.com directory. 4.1. Changing default logging interval You can change the default logging interval to either increase or decrease the sampling rate at which the PCP metrics are logged. A larger interval results in a lower sampling rate. Procedure Open the /etc/pcp/pmlogger/control.d/local configuration file. Locate the LOCALHOSTNAME line. Append -t XXs , where XX is the required time interval in seconds. Save the file. Restart the pmlogger service: 4.2. Changing data retention policy You can change the data retention policy to control how long the PCP data is kept before it is archived and deleted. Procedure Open the /etc/sysconfig/pmlogger_timers file. Locate the PMLOGGER_DAILY_PARAMS line. If the line is commented, uncomment the line. Configure the following parameters: Ensure the default -E parameter is present. Append the -x parameter and add a value for the required number of days after which data is archived. Append the -k parameter and add a value for the number of days after which data is deleted. For example, the parameters -x 4 -k 7 specify that data will be compressed after 4 days and deleted after 7 days. Save the file. 4.3. Viewing data storage statistics You can list all available metrics, grouped by the frequency at which they are logged. For each group, you can also view the storage required to store the listed metrics, per day. Example storage statistics: Procedure To view data storage statistics, enter the following command on your Satellite Server:
[ "systemctl restart pmlogger", "logged every 60 sec: 61752 bytes or 84.80 Mbytes/day", "less /var/log/pcp/pmlogger/ satellite.example.com /pmlogger.log" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/monitoring_satellite_performance/metrics-data-retention_monitoring
Chapter 2. RHOSP server group configuration for HA instances
Chapter 2. RHOSP server group configuration for HA instances Create an instance server group before you create the RHOSP HA cluster node instances. Group the instances by affinity policy. If you configure multiple clusters, ensure that you have only one server group per cluster. The affinity policy you set for the server group can determine whether the cluster remains operational if the hypervisor fails. The default affinity policy is affinity . With this affinity policy, all of the cluster nodes could be created on the same RHOSP hypervisor. In this case, if the hypervisor fails, the entire cluster fails. For this reason, set an affinity policy for the server group of anti-affinity or soft-anti-affinity . With an affinity policy of anti-affinity , the server group allows only one cluster node per Compute node. Attempting to create more cluster nodes than Compute nodes generates an error. While this configuration provides the highest protection against RHOSP hypervisor failures, it may require more resources to deploy large clusters than you have available. With an affinity policy of soft-anti-affinity , the server group distributes cluster nodes as evenly as possible across all Compute nodes. Although this provides less protection against hypervisor failures than a policy of anti-affinity , it provides a greater level of high availability than an affinity policy of affinity . Determining the server group affinity policy for your deployment requires balancing your cluster needs against the resources you have available by taking the following cluster components into account: The number of nodes in the cluster The number of RHOSP Compute nodes available The number of nodes required for cluster quorum to retain cluster operations For information about affinity and creating an instance server group, see Compute scheduler filters and the Command Line Interface Reference .
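The following is a minimal sketch of creating such a server group and launching a cluster node instance into it with the OpenStack CLI; the group name, instance name, and the image, flavor, and network values are placeholders, and soft-anti-affinity can be substituted for anti-affinity where that policy better matches your available resources.
# openstack server group create --policy anti-affinity ha_cluster_group
# openstack server create --image <image> --flavor <flavor> --network <network> --hint group=<server_group_UUID> node01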
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/ref_recommended-rhosp-server-group-configuration_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
20.29. Storage Pool Commands
20.29. Storage Pool Commands Using libvirt , you can manage various storage solutions, including files, raw partitions, and domain-specific formats, used to provide the storage volumes visible as devices within virtual machines. For more detailed information, see the libvirt upstream pages . Many of the commands for administering storage pools are similar to the ones used for guest virtual machines. 20.29.1. Searching for a Storage Pool XML The virsh find-storage-pool-sources type command displays the XML describing all storage pools of a given source that could be found. Types include: netfs, disk, dir, fs, iscsi, logical, and gluster. Note that all of the types correspond to the storage back-end drivers and there are more types available (see the man page for more details). You can also further restrict the query for pools by providing a template source XML file using the --srcSpec option. Example 20.71. How to list the XML setting of available storage pools The following example outputs the XML setting of all logical storage pools available on the system: 20.29.2. Finding a Storage Pool The virsh find-storage-pool-sources-as type command finds potential storage pool sources, given a specific type. Types include: netfs, disk, dir, fs, iscsi, logical, and gluster. Note that all of the types correspond to the storage back-end drivers and there are more types available (see the man page for more details). The command also takes the optional arguments host , port , and initiator . Each of these options will dictate what gets queried. Example 20.72. How to find potential storage pool sources The following example searches for a disk-based storage pool on the specified host machine. If you are unsure of your host name, run the command virsh hostname first: # virsh find-storage-pool-sources-as disk --host myhost.example.com 20.29.3. Listing Storage Pool Information The virsh pool-info pool command lists the basic information about the specified storage pool object. This command requires the name or UUID of the storage pool. To retrieve this information, use the pool-list command. Example 20.73. How to retrieve information on a storage pool The following example retrieves information on the storage pool named vdisk : 20.29.4. Listing the Available Storage Pools The virsh pool-list command lists all storage pool objects known to libvirt . By default, only active pools are listed; but using the --inactive argument lists just the inactive pools, and using the --all argument lists all of the storage pools. This command takes the following optional arguments, which filter the search results: --inactive - lists the inactive storage pools --all - lists both active and inactive storage pools --persistent - lists the persistent storage pools --transient - lists the transient storage pools --autostart - lists the storage pools with autostart enabled --no-autostart - lists the storage pools with autostart disabled --type type - lists the pools that are only of the specified type --details - lists the extended details for the storage pools In addition to the above arguments, there are several sets of filtering flags that can be used to filter the content of the list. --persistent restricts the list to persistent pools, --transient restricts the list to transient pools, --autostart restricts the list to autostarting pools and finally --no-autostart restricts the list to the storage pools that have autostarting disabled. 
For all storage pool commands which require a --type , the pool types must be separated by commas. The valid pool types include: dir , fs , netfs , logical , disk , iscsi , scsi , mpath , rbd , sheepdog , and gluster . The --details option instructs virsh to additionally display pool persistence and capacity related information where available. Note When this command is used with older servers, it is forced to use a series of API calls with an inherent race, where a pool might not be listed or might appear more than once if it changed its state between calls while the list was being collected. Newer servers, however, do not have this problem. Example 20.74. How to list all storage pools This example lists storage pools that are both active and inactive: 20.29.5. Refreshing a Storage Pool List The virsh pool-refresh pool command refreshes the list of storage volumes contained in the storage pool. Example 20.75. How to refresh the list of the storage volumes in a storage pool The following example refreshes the list of storage volumes in the storage pool named vdisk : 20.29.6. Creating, Defining, and Starting Storage Pools 20.29.6.1. Building a storage pool The virsh pool-build pool command builds a storage pool using the name given in the command. The optional arguments --overwrite and --no-overwrite can only be used for an FS storage pool or with a disk or logical type based storage pool. Note that if [--overwrite] or [--no-overwrite] are not provided and the pool used is FS, it is assumed that the type is actually directory-based. In addition to the pool name, the storage pool UUID may be used as well. If --no-overwrite is specified, it probes to determine if a file system already exists on the target device, returning an error if it exists, or using mkfs to format the target device if it does not. If --overwrite is specified, then the mkfs command is executed and any existing data on the target device is overwritten. Example 20.76. How to build a storage pool The following example creates a disk-based storage pool named vdisk : 20.29.6.2. Defining a storage pool from an XML file The virsh pool-define file command creates, but does not start, a storage pool object from the XML file . Example 20.77. How to define a storage pool from an XML file This example assumes that you have already created an XML file with the settings for your storage pool. For example: The following command then defines a directory-type storage pool from the XML file (named vdisk.xml in this example): To confirm that the storage pool was defined, run the virsh pool-list --all command as shown in Example 20.74, "How to list all storage pools" . When you run the command, however, the status will show as inactive as the pool has not been started. For directions on starting the storage pool see Example 20.81, "How to start a storage pool" . 20.29.6.3. Creating storage pools The virsh pool-create file command creates and starts a storage pool from its associated XML file. Example 20.78. How to create a storage pool from an XML file This example assumes that you have already created an XML file with the settings for your storage pool. For example: The following example builds a directory-type storage pool based on the XML file (named vdisk.xml in this example): To confirm that the storage pool was created, run the virsh pool-list --all command as shown in Example 20.74, "How to list all storage pools" . When you run the command, however, the status will show as inactive as the pool has not been started. 
For directions on starting the storage pool see Example 20.81, "How to start a storage pool" . 20.29.6.4. Creating storage pools The virsh pool-create-as name command creates and starts a pool object name from the raw parameters given. This command takes the following options: --print-xml - displays the contents of the XML file, but does not define or create a storage pool from it --type type defines the storage pool type. See Section 20.29.4, "Listing the Available Storage Pools" for the types you can use. --source-host hostname - the source host physical machine for underlying storage --source-path path - the location of the underlying storage --source-dev path - the device for the underlying storage --source-name name - the name of the source underlying storage --source-format format - the format of the source underlying storage --target path - the target for the underlying storage Example 20.79. How to create and start a storage pool The following example creates and starts a storage pool named vdisk at the /mnt directory: 20.29.6.5. Defining a storage pool The virsh pool-define-as <name> command creates, but does not start, a pool object name from the raw parameters given. This command accepts the following options: --print-xml - displays the contents of the XML file, but does not define or create a storage pool from it --type type defines the storage pool type. See Section 20.29.4, "Listing the Available Storage Pools" for the types you can use. --source-host hostname - source host physical machine for underlying storage --source-path path - location of the underlying storage --source-dev devicename - device for the underlying storage --source-name sourcename - name of the source underlying storage --source-format format - format of the source underlying storage --target targetname - target for the underlying storage If --print-xml is specified, then it prints the XML of the pool object without creating or defining the pool. Otherwise, the pool requires a specified type to be built. For all storage pool commands which require a type , the pool types must be separated by comma. The valid pool types include: dir , fs , netfs , logical , disk , iscsi , scsi , mpath , rbd , sheepdog , and gluster . Example 20.80. How to define a storage pool The following example defines a storage pool named vdisk , but does not start it. After this command runs, use the virsh pool-start command to activate the storage pool: 20.29.6.6. Starting a storage pool The virsh pool-start pool command starts the specified storage pool, which was previously defined but inactive. This command may also use the UUID for the storage pool as well as the pool's name. Example 20.81. How to start a storage pool The following example starts the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" : To verify the pool has started run the virsh pool-list --all command and confirm that the status is active, as shown in Example 20.74, "How to list all storage pools" . 20.29.6.7. Auto-starting a storage pool The virsh pool-autostart pool command enables a storage pool to automatically start at boot. This command requires the pool name or UUID. To disable the pool-autostart command use the --disable argument in the command. Example 20.82. How to autostart a storage pool The following example autostarts the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" : 20.29.7. 
Stopping and Deleting Storage Pools The virsh pool-destroy pool command stops a storage pool. Once stopped, libvirt will no longer manage the pool but the raw data contained in the pool is not changed, and can be later recovered with the pool-create command. Example 20.83. How to stop a storage pool The following example stops the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" : The virsh pool-delete pool command destroys the resources used by the specified storage pool. It is important to note that this operation is non-recoverable and non-reversible. However, the pool structure will still exist after this command, ready to accept the creation of new storage volumes. Example 20.84. How to delete a storage pool The following example deletes the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" . The virsh pool-undefine pool command undefines the configuration for an inactive pool. Example 20.85. How to undefine a storage pool The following example undefines the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" . This makes your storage pool transient. 20.29.8. Creating an XML Dump File for a Pool The virsh pool-dumpxml pool command returns the XML information about the specified storage pool object. Using the option --inactive dumps the configuration that will be used on start of the pool instead of the current pool configuration. Example 20.86. How to retrieve a storage pool's configuration settings The following example retrieves the configuration settings for the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" . Once the command runs, the configuration file opens in the terminal: 20.29.9. Editing the Storage Pool's Configuration File The pool-edit pool command opens the specified storage pool's XML configuration file for editing. This method is the only method that should be used to edit an XML configuration file as it does error checking before applying. Example 20.87. How to edit a storage pool's configuration settings The following example edits the configuration settings for the vdisk storage pool that you built in Example 20.78, "How to create a storage pool from an XML file" . Once the command runs, the configuration file opens in your default editor:
[ "virsh find-storage-pool-sources logical <sources> <source> <device path='/dev/mapper/luks-7a6bfc59-e7ed-4666-a2ed-6dcbff287149'/> <name>RHEL_dhcp-2-157</name> <format type='lvm2'/> </source> </sources>", "virsh pool-info vdisk Name: vdisk UUID: State: running Persistent: yes Autostart: no Capacity: 125 GB Allocation: 0.00 Available: 125 GB", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes vdisk active no", "virsh pool-refresh vdisk Pool vdisk refreshed", "virsh pool-build vdisk Pool vdisk built", "<pool type=\"dir\"> <name>vdisk</name> <target> <path>/var/lib/libvirt/images</path> </target> </pool>", "virsh pool-define vdisk.xml Pool vdisk defined", "<pool type=\"dir\"> <name>vdisk</name> <target> <path>/var/lib/libvirt/images</path> </target> </pool>", "virsh pool-create vdisk.xml Pool vdisk created", "virsh pool-create-as --name vdisk --type dir --target /mnt Pool vdisk created", "virsh pool-define-as --name vdisk --type dir --target /mnt Pool vdisk defined", "virsh pool-start vdisk Pool vdisk started", "virsh pool-autostart vdisk Pool vdisk autostarted", "virsh pool-destroy vdisk Pool vdisk destroyed", "virsh pool-delete vdisk Pool vdisk deleted", "virsh pool-undefine vdisk Pool vdisk undefined", "virsh pool-dumpxml vdisk <pool type=\"dir\"> <name>vdisk</name> <target> <path>/var/lib/libvirt/images</path> </target> </pool>", "virsh pool-edit vdisk <pool type=\"dir\"> <name>vdisk</name> <target> <path>/var/lib/libvirt/images</path> </target> </pool>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Storage_pool_commands
8.213. selinux-policy
8.213. selinux-policy 8.213.1. RHBA-2014:1568 - selinux-policy bug fix and enhancement update Updated selinux-policy packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fixes BZ# 1062384 SELinux prevented the qemu-guest-agent process from executing the settimeofday() and hwclock() functions. Consequently, qemu-guest-agent was unable to set the system time. A new rule has been added to the SELinux policy and qemu-guest-agent can now set the time as expected. BZ# 1082640 Previously, SELinux did not allow the dhcpd daemon to change file ownership on the system. As a consequence, the ownership of files that dhcpd created was changed from the required dhcpd:dhcpd to root:root. The appropriate SELinux policy has been changed, and dhcpd is now able to change the file ownership on the system. BZ# 1097387 Due to the missing miscfiles_read_public_files Boolean, the user could not allow the sshd daemon to read public files used for file transfer services. The Boolean has been added to the SELinux policy, thus providing the user the ability to set sshd to read public files. BZ# 1111538 Due to a missing SELinux policy rule, the syslog daemon was unable to read the syslogd configuration files labeled with the syslog_conf_t SELinux context. With this update, the SELinux policy has been modified accordingly, and syslog now can read the syslog_conf_t files as expected. BZ# 1111581 Due to an insufficient SELinux policy rule, the thttpd daemon ran in the httpd_t domain. As a consequence, the daemon was unable to change file attributes of its log files. The SELinux policy has been modified to fix this bug, and SELinux no longer prevents thttpd from changing attributes of its log files. BZ# 1122866 Previously, SELinux did not allow the sssd daemon to write to the krb5 configuration file, thus the daemon was unable to make any changes in krb5. The SELinux policy has been changed with this update, and sssd can now write to krb5. BZ# 1127602 Due to a missing SELinux policy rule, the Samba daemons could not list the /tmp/ directory. The SELinux policy has been modified accordingly, and SELinux no longer prevents the Samba daemons from listing the /tmp/ directory. In addition, this update adds the following Enhancement BZ# 1069843 With this update, new SELinux policy rules have been added, and the following services now run in their own domains, not in the initrc_t domain: thttpd Users of selinux-policy are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
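As an illustrative check of the thttpd enhancement (BZ#1069843), you can confirm the domain of a running thttpd process; the exact domain label depends on the installed policy version, so thttpd_t below is an assumption rather than a guaranteed value.
# ps -eZ | grep thttpd
A confined thttpd process is expected to show a dedicated domain such as thttpd_t rather than initrc_t.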
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/selinux-policy
21.3. Printer Configuration
21.3. Printer Configuration The Printer Configuration tool is used to configure printers, maintain printer configuration files, manage print spool directories and print filters, and manage printer classes. The tool is based on the Common Unix Printing System ( CUPS ). If you upgraded the system from a Red Hat Enterprise Linux version that used CUPS, the upgrade process preserved the configured printers. Important The cupsd.conf man page documents configuration of a CUPS server. It includes directives for enabling SSL support. However, CUPS does not allow control of the protocol versions used. Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings , Red Hat recommends that you do not rely on this for security. It is recommended that you use stunnel to provide a secure tunnel and disable SSLv3 . For more information on using stunnel , see the Red Hat Enterprise Linux 6 Security Guide . For ad-hoc secure connections to a remote system's Print Settings tool, use X11 forwarding over SSH as described in Section 14.5.1, "X11 Forwarding" . Note You can perform the same and additional operations on printers directly from the CUPS web application or command line. To access the application, in a web browser, go to http://localhost:631/ . For CUPS manuals, see the links on the Home tab of the web site. 21.3.1. Starting the Printer Configuration Tool With the Printer Configuration tool you can perform various operations on existing printers and set up new printers. However, you can also use CUPS directly (go to http://localhost:631/ to access CUPS). On the panel, click System Administration Printing , or run the system-config-printer command from the command line to start the tool. The Printer Configuration window depicted in Figure 21.3, "Printer Configuration window" appears. Figure 21.3. Printer Configuration window
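For the ad-hoc remote scenario mentioned in the Important note above, a hedged example of starting the tool on a remote RHEL 6 system over X11 forwarding might look like the following; the host name is a placeholder.
# ssh -X root@printserver.example.com system-config-printer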
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Printer_Configuration
Upgrading connected Red Hat Satellite to 6.15
Upgrading connected Red Hat Satellite to 6.15 Red Hat Satellite 6.15 Upgrade Satellite Server and Capsule Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/upgrading_connected_red_hat_satellite_to_6.15/index
Chapter 1. Preparing to install on IBM Z and IBM LinuxONE
Chapter 1. Preparing to install on IBM Z and IBM LinuxONE 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 1.2. Choosing a method to install OpenShift Container Platform on IBM Z or IBM LinuxONE The OpenShift Container Platform installation program offers the following methods for deploying a cluster on IBM Z(R): Interactive : You can deploy a cluster with the web-based Assisted Installer . This method requires no setup for the installer, and is ideal for connected environments like IBM Z(R). Local Agent-based : You can deploy a cluster locally with the Agent-based Installer . It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command line interface (CLI). This approach is ideal for disconnected networks. Full control : You can deploy a cluster on infrastructure that you prepare and maintain , which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Table 1.1. IBM Z(R) installation options Assisted Installer Agent-based Installer User-provisioned installation Installer-provisioned installation IBM Z(R) with z/VM [✓] [✓] [✓] Restricted network IBM Z(R) with z/VM [✓] [✓] IBM Z(R) with RHEL KVM [✓] [✓] [✓] Restricted network IBM Z(R) with RHEL KVM [✓] [✓] IBM Z(R) in an LPAR [✓] Restricted network IBM Z(R) in an LPAR [✓] For more information about the installation process, see the Installation process . 1.2.1. User-provisioned infrastructure installation of OpenShift Container Platform on IBM Z User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the IBM Z(R) platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform with z/VM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with z/VM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 
You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform with KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with RHEL KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform in an LPAR on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_z_and_ibm_linuxone/preparing-to-install-on-ibm-z
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you to understand the requirements and processes behind setting up an automation mesh on an operator-based installation of Red Hat Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_managed_cloud_or_operator_environments/pr01
1.6. Using Network Kernel Tunables with sysctl
1.6. Using Network Kernel Tunables with sysctl Using certain kernel tunables through the sysctl utility, you can adjust network configuration on a running system and directly affect networking performance. To change network settings, use the sysctl command. For permanent changes that persist across system restarts, add lines to the /etc/sysctl.conf file. To display a list of all available sysctl parameters, enter the following command as root : For more details on network kernel tunables using sysctl , see the Using PTP with Multiple Interfaces section in the System Administrator's Guide. For more information on network kernel tunables, see the Network Interface Tunables section in the Kernel Administration Guide.
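The following minimal sketch shows the workflow described above end to end; the tunable and value are illustrative examples only, not recommendations from this guide:
~]# sysctl net.ipv4.ip_forward
~]# sysctl -w net.ipv4.ip_forward=1
~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
~]# sysctl -p
The first two commands read and change the value on the running system, the echo line records the setting in /etc/sysctl.conf so that it survives a reboot, and sysctl -p reloads that file.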
[ "~]# sysctl -a" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-using_network_kernel_tunables_with_sysctl
Chapter 2. Life cycle dates
Chapter 2. Life cycle dates Red Hat Automation Hub Release General Availability Full support ends Maintenance Support 1 ends End of Life 4.4 December 2, 2021 June 2, 2022 December 2, 2022 June 2, 2023 4.2 November 18, 2020 May 17, 2021 November 18, 2021 November 18, 2022
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/private_automation_hub_life_cycle/life_cycle_dates
Chapter 3. Performing additional configuration on Capsule Server
Chapter 3. Performing additional configuration on Capsule Server After installation, you can configure additional settings on your Capsule Server. 3.1. Configuring Capsule for host registration and provisioning Use this procedure to configure Capsule so that you can register and provision hosts using your Capsule Server instead of your Satellite Server. Procedure On Satellite Server, add the Capsule to the list of trusted proxies. This is required for Satellite to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by Capsule. For security reasons, Satellite recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of Capsules, or network ranges. Warning Do not use a network range that is too broad because that might cause a security risk. Enter the following command. Note that the command overwrites the list that is currently stored in Satellite. Therefore, if you have set any trusted proxies previously, you must include them in the command as well: The localhost entries are required; do not omit them. Verification List the current trusted proxies using the full help of the Satellite installer: The current listing contains all trusted proxies you require. 3.2. Configuring pull-based transport for remote execution By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from Capsule Server to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to Capsule Server. The use of pull-based transport is not limited to those infrastructures. The pull-based transport comprises pull-mqtt mode on Capsules in combination with a pull client running on hosts. Note The pull-mqtt mode works only with the Script provider. Ansible and other providers will continue to use their default transport settings. The mode is configured per Capsule Server. Some Capsule Servers can be configured to use pull-mqtt mode while others use SSH. If this is the case, it is possible that one remote job on a given host will use the pull client and the next job on the same host will use SSH. If you wish to avoid this scenario, configure all Capsule Servers to use the same mode. Procedure Enable the pull-based transport on your Capsule Server: Configure the firewall to allow the MQTT service on port 1883: Make the changes persistent: In pull-mqtt mode, hosts subscribe for job notifications to either your Satellite Server or any Capsule Server through which they are registered. Ensure that Satellite Server sends remote execution jobs to that same Satellite Server or Capsule Server: In the Satellite web UI, navigate to Administer > Settings . On the Content tab, set the value of Prefer registered through Capsule for remote execution to Yes . Next steps Configure your hosts for the pull-based transport. For more information, see Transport modes for remote execution in Managing hosts . 3.3. Enabling OpenSCAP on Capsule Servers On Satellite Server and the integrated Capsule of your Satellite Server, OpenSCAP is enabled by default. To use the OpenSCAP plugin and content on external Capsules, you must enable OpenSCAP on each Capsule. Procedure To enable OpenSCAP, enter the following command: If you want to use Puppet to deploy compliance policies, you must enable it first. For more information, see Managing configurations by using Puppet integration . 3.4.
Adding lifecycle environments to Capsule Servers If your Capsule Server has the content functionality enabled, you must add an environment so that Capsule can synchronize content from Satellite Server and provide content to host systems. Do not assign the Library lifecycle environment to your Capsule Server because it triggers an automated Capsule sync every time the CDN updates a repository. This might consume multiple system resources on Capsules, network bandwidth between Satellite and Capsules, and available disk space on Capsules. You can use Hammer CLI on Satellite Server or the Satellite web UI. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule that you want to add a lifecycle to. Click Edit and click the Lifecycle Environments tab. From the left menu, select the lifecycle environments that you want to add to Capsule and click Submit . To synchronize the content on the Capsule, click the Overview tab and click Synchronize . Select either Optimized Sync or Complete Sync . For definitions of each synchronization type, see Recovering a Repository . CLI procedure To display a list of all Capsule Servers, on Satellite Server, enter the following command: Note the Capsule ID of the Capsule to which you want to add a lifecycle. Using the ID, verify the details of your Capsule: To view the lifecycle environments available for your Capsule Server, enter the following command and note the ID and the organization name: Add the lifecycle environment to your Capsule Server: Repeat for each lifecycle environment you want to add to Capsule Server. Synchronize the content from Satellite to Capsule. To synchronize all content from your Satellite Server environment to Capsule Server, enter the following command: To synchronize a specific lifecycle environment from your Satellite Server to Capsule Server, enter the following command: To synchronize all content from your Satellite Server to your Capsule Server without checking metadata: This equals selecting Complete Sync in the Satellite web UI. 3.5. Enabling power management on hosts To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on Capsule Server. Prerequisites All hosts must have a network interface of BMC type. Capsule Server uses this NIC to pass the appropriate credentials to the host. For more information, see Adding a Baseboard Management Controller (BMC) Interface in Managing hosts . Procedure To enable BMC, enter the following command: 3.6. Configuring DNS, DHCP, and TFTP on Capsule Server To configure the DNS, DHCP, and TFTP services on Capsule Server, use the satellite-installer command with the options appropriate for your environment. Any changes to the settings require entering the satellite-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values. Prerequisites You must have the correct network name ( dns-interface ) for the DNS server. You must have the correct interface name ( dhcp-interface ) for the DHCP server. Contact your network administrator to ensure that you have the correct settings. Procedure Enter the satellite-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services: You can monitor the progress of the satellite-installer command displayed in your prompt. 
You can view the logs in /var/log/foreman-installer/satellite.log . Additional resources For more information about the satellite-installer command, enter satellite-installer --help . For more information about configuring DNS, DHCP, and TFTP externally, see Chapter 4, Configuring Capsule Server with external services . For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning hosts .
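As a hedged sketch of how the installer run and the resulting services can be checked (the exact set of services present depends on which options you enabled), you can follow the log file named above and then query the service status:
tail -f /var/log/foreman-installer/satellite.log
satellite-maintain service status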
[ "satellite-installer --foreman-trusted-proxies \"127.0.0.1/8\" --foreman-trusted-proxies \"::1\" --foreman-trusted-proxies \" My_IP_address \" --foreman-trusted-proxies \" My_IP_range \"", "satellite-installer --full-help | grep -A 2 \"trusted-proxies\"", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt", "firewall-cmd --add-service=mqtt", "firewall-cmd --runtime-to-permanent", "satellite-installer --enable-foreman-proxy-plugin-openscap --foreman-proxy-plugin-openscap-ansible-module true --foreman-proxy-plugin-openscap-puppet-module true", "hammer capsule list", "hammer capsule info --id My_capsule_ID", "hammer capsule content available-lifecycle-environments --id My_capsule_ID", "hammer capsule content add-lifecycle-environment --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID --organization \" My_Organization \"", "hammer capsule content synchronize --id My_capsule_ID", "hammer capsule content synchronize --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID", "hammer capsule content synchronize --id My_capsule_ID --skip-metadata-check true", "satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"", "satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/performing-additional-configuration-on-capsule-server_capsule
19.3. Creating and Editing Password Policies
19.3. Creating and Editing Password Policies A password policy can be selective; it may only define certain elements. A global password policy sets defaults that are used for every user entry, unless a group policy takes priority. Note A global policy always exists, so there is no reason to add a global password policy. Group-level policies override the global policies and offer specific policies that only apply to group members. Password policies are not cumulative. Either a group policy or the global policy is in effect for a user or group, but not both simultaneously. Group-level policies do not exist by default, so they must be created manually. Note It is not possible to set a password policy for a non-existent group. 19.3.1. Creating Password Policies in the Web UI Click the Policy tab, and then click the Password Policies subtab. All of the policies in the UI are listed by group. The global password policy is defined by the global_policy group. Click the group link. Click the Add link at the top. In the pop-up box, select the group for which to create the password policy. Set the priority of the policy. The higher the number, the lower the priority. Conversely, the highest priority policy has the lowest number. Only one password policy is in effect for a user, and that is the highest priority policy. Note The priority cannot be changed in the UI once the policy is created. Click the Add and Edit button so that the policy form immediately opens. Set the policy fields. Leaving a field blank means that the attribute is not added to the password policy configuration. Max lifetime sets the maximum amount of time, in days, that a password is valid before a user must reset it. Min lifetime sets the minimum amount of time, in hours, that a password must remain in effect before a user is permitted to change it. This prevents a user from attempting to change a password back immediately to an older password or from cycling through the password history. History size sets how many passwords are stored. A user cannot re-use a password that is still in the password history. Character classes sets the number of different categories of characters that must be used in the password. This does not set which classes must be used; it sets the number of different (unspecified) classes which must be used in a password. For example, a character class can be a number, special character, or capital; the complete list of categories is in Table 19.1, "Password Policy Settings" . This is part of setting the complexity requirements. Min length sets how many characters must be in a password. This is part of setting the complexity requirements. 19.3.2. Creating Password Policies with the Command Line Password policies are added with the pwpolicy-add command. For example: Note Setting an attribute to a blank value effectively removes that attribute from the password policy. 19.3.3. Editing Password Policies with the Command Line As with most IdM entries, a password policy is edited by using a *-mod command, pwpolicy-mod , and then the policy name. However, there is one difference with editing password policies: there is a global policy which always exists. Editing a group-level password policy is slightly different than editing the global password policy. Editing a group-level password policy follows the standard syntax of *-mod commands. It uses the pwpolicy-mod command, the name of the policy entry, and the attributes to change.
For example: To edit the global password policy, use the pwpolicy-mod command with the attributes to change, but without specifying a password policy name . For example:
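After creating or editing group-level policies, you can confirm which policy is actually in effect for a particular user with the pwpolicy-show command; the user name below is only an example:
ipa pwpolicy-show --user=jsmith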
[ "kinit admin ipa pwpolicy-add groupName --attribute-value", "kinit admin ipa pwpolicy-add exampleGroup --minlife=7 --maxlife=49 --history= --priority=1 Group: exampleGroup Max lifetime (days): 49 Min lifetime (hours): 7 Priority: 1", "[jsmith@ipaserver ~]USD ipa pwpolicy-mod exampleGroup --lockouttime=300 --history=5 --minlength=8", "[jsmith@ipaserver ~]USD ipa pwpolicy-mod --lockouttime=300 --history=5 --minlength=8" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/Setting_Different_Password_Policies_for_Different_User_Groups
Chapter 7. Ceph File System quotas
Chapter 7. Ceph File System quotas As a storage administrator, you can view, set, and remove quotas on any directory in the file system. You can place quota restrictions on the number of bytes or the number of files within the directory. 7.1. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Make sure that the attr package is installed. 7.2. Ceph File System quotas The Ceph File System (CephFS) quotas allow you to restrict the number of bytes or the number of files stored in the directory structure. Ceph File System quotas are fully supported when using a FUSE client or kernel clients version 4.17 or newer. Limitations CephFS quotas rely on the cooperation of the client mounting the file system to stop writing data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial, untrusted client from filling the file system. Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit, and when the processes stop writing data. The time period is generally measured in tenths of seconds. However, processes continue to write data during that time. The amount of additional data that the processes write depends on the amount of time elapsed before they stop. When using path-based access restrictions, be sure to configure the quota on the directory to which the client is restricted, or to a directory nested beneath it. If the client has restricted access to a specific path based on the MDS capability, and the quota is configured on an ancestor directory that the client cannot access, the client will not enforce the quota. For example, if the client cannot access the /home/ directory and the quota is configured on /home/ , the client cannot enforce that quota on the directory /home/user/ . Snapshot file data that has been deleted or changed does not count towards the quota. Quotas are not supported with NFS clients when using setxattr , and file-level quotas are not supported on NFS. To use quotas on NFS shares, you can export them by using subvolumes and setting the --size option. 7.3. Viewing quotas Use the getfattr command and the ceph.quota extended attributes to view the quota settings for a directory. Note If the attributes appear on a directory inode, then that directory has a configured quota. If the attributes do not appear on the inode, then the directory does not have a quota set, although its parent directory might have a quota configured. If the value of the extended attribute is 0 , the quota is not set. Prerequisites Root-level access to the Ceph client node. The attr package is installed. Procedure To view CephFS quotas. Using a byte-limit quota: Syntax Example In this example, 100000000 equals 100 MB. Using a file-limit quota: Syntax Example In this example, 10000 equals 10,000 files. Additional Resources See the getfattr(1) manual page for more information. 7.4. Setting quotas This section describes how to use the setfattr command and the ceph.quota extended attributes to set the quota for a directory. Prerequisites Root-level access to the Ceph client node. The attr package is installed. Procedure To set CephFS quotas. Using a byte-limit quota: Syntax Example In this example, 100000000 bytes equals 100 MB. Using a file-limit quota: Syntax Example In this example, 10000 equals 10,000 files. Additional Resources See the setfattr(1) manual page for more information. 7.5.
Removing quotas This section describes how to use the setfattr command and the ceph.quota extended attributes to remove a quota from a directory. Prerequisites Root-level access to the Ceph client node. Make sure that the attr package is installed. Procedure To remove CephFS quotas. Using a byte-limit quota: Syntax Example Using a file-limit quota: Syntax Example Additional Resources See the setfattr(1) manual page for more information. 7.6. Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . See the getfattr(1) manual page for more information. See the setfattr(1) manual page for more information.
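The subvolume-based approach for NFS shares mentioned in the limitations above can be sketched as follows; the volume name, subvolume name, and size in bytes are placeholders rather than values from this guide:
ceph fs subvolume create cephfs sub0 --size 100000000000
Exporting this subvolume over NFS then applies the size limit set at creation time, which is the supported way to restrict the size of an NFS share.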
[ "getfattr -n ceph.quota.max_bytes DIRECTORY", "getfattr -n ceph.quota.max_bytes /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_bytes=\"100000000\"", "getfattr -n ceph.quota.max_files DIRECTORY", "getfattr -n ceph.quota.max_files /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_files=\"10000\"", "setfattr -n ceph.quota.max_bytes -v 100000000 DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 100000000 /cephfs/", "setfattr -n ceph.quota.max_files -v 10000 DIRECTORY", "setfattr -n ceph.quota.max_files -v 10000 /cephfs/", "setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/", "setfattr -n ceph.quota.max_files -v 0 DIRECTORY", "setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/ceph-file-system-quotas
Chapter 4. Postinstallation cluster tasks
Chapter 4. Postinstallation cluster tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements. 4.1. Available cluster customizations You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available. Note If you install your cluster on IBM Z(R), not all features and functions are available. You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider. For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1 4.1.1. Cluster configuration resources All cluster configuration resources are globally scoped (not namespaced) and named cluster . Resource name Description apiserver.config.openshift.io Provides API server configuration such as certificates and certificate authorities . authentication.config.openshift.io Controls the identity provider and authentication configuration for the cluster. build.config.openshift.io Controls default and enforced configuration for all builds on the cluster. console.config.openshift.io Configures the behavior of the web console interface, including the logout behavior . featuregate.config.openshift.io Enables FeatureGates so that you can use Tech Preview features. image.config.openshift.io Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). ingress.config.openshift.io Configuration details related to routing such as the default domain for routes. oauth.config.openshift.io Configures identity providers and other behavior related to internal OAuth server flows. project.config.openshift.io Configures how projects are created including the project template. proxy.config.openshift.io Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. scheduler.config.openshift.io Configures scheduler behavior such as profiles and default node selectors. 4.1.2. Operator configuration resources These configuration resources are cluster-scoped instances, named cluster , which control the behavior of a specific component as owned by a particular Operator. Resource name Description consoles.operator.openshift.io Controls console appearance such as branding customizations config.imageregistry.operator.openshift.io Configures OpenShift image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. config.samples.operator.openshift.io Configures the Samples Operator to control which example image streams and templates are installed on the cluster. 4.1.3. Additional configuration resources These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances. Resource name Instance name Namespace Description alertmanager.monitoring.coreos.com main openshift-monitoring Controls the Alertmanager deployment parameters. 
ingresscontroller.operator.openshift.io default openshift-ingress-operator Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. 4.1.4. Informational Resources You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly. Resource name Instance name Description clusterversion.config.openshift.io version In OpenShift Container Platform 4.17, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster . dns.config.openshift.io cluster You cannot modify the DNS settings for your cluster. You can check the DNS Operator status . infrastructure.config.openshift.io cluster Configuration details allowing the cluster to interact with its cloud provider. network.config.openshift.io cluster You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation . 4.2. Adding worker nodes After you deploy your OpenShift Container Platform cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster. 4.2.1. Adding worker nodes to an on-premise cluster For on-premise clusters, you can add worker nodes by using the OpenShift Container Platform CLI ( oc ) to generate an ISO image, which can then be used to boot one or more nodes in your target cluster. This process can be used regardless of how you installed your cluster. You can add one or more nodes at a time while customizing each node with more complex configurations, such as static network configuration, or you can specify only the MAC address of each node. Any configurations that are not specified during ISO generation are retrieved from the target cluster and applied to the new nodes. Preflight validation checks are also performed when booting the ISO image to inform you of failure-causing issues before you attempt to boot each node. Adding worker nodes to an on-premise cluster 4.2.2. Adding worker nodes to installer-provisioned infrastructure clusters For installer-provisioned infrastructure clusters, you can manually or automatically scale the MachineSet object to match the number of available bare-metal hosts. To add a bare-metal host, you must configure all network prerequisites, configure an associated baremetalhost object, then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console. Adding worker nodes using the web console Adding worker nodes using YAML in the web console Manually adding a worker node to an installer-provisioned infrastructure cluster 4.2.3. Adding worker nodes to user-provisioned infrastructure clusters For user-provisioned infrastructure clusters, you can add worker nodes by using a RHEL or RHCOS ISO image and connecting it to your cluster using cluster Ignition config files. For RHEL worker nodes, the following example uses Ansible playbooks to add worker nodes to the cluster. For RHCOS worker nodes, the following example uses an ISO image and network booting to add worker nodes to the cluster. Adding RHCOS worker nodes to a user-provisioned infrastructure cluster Adding RHEL worker nodes to a user-provisioned infrastructure cluster 4.2.4. 
Adding worker nodes to clusters managed by the Assisted Installer For clusters managed by the Assisted Installer, you can add worker nodes by using the Red Hat OpenShift Cluster Manager console, the Assisted Installer REST API or you can manually add worker nodes using an ISO image and cluster Ignition config files. Adding worker nodes using the OpenShift Cluster Manager Adding worker nodes using the Assisted Installer REST API Manually adding worker nodes to a SNO cluster 4.2.5. Adding worker nodes to clusters managed by the multicluster engine for Kubernetes For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console. Creating your cluster with the console 4.3. Adjust worker nodes If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scale them up, then scale the original compute machine set down before removing them. 4.3.1. Understanding the difference between compute machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 4.3.2. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machines.machine.openshift.io -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api Or: USD oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. 
If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines.machine.openshift.io 4.3.3. The compute machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling compute machine sets down. The deletion policy can be set according to the use case by modifying the particular compute machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker compute machine set to 0 unless you first relocate the router pods. Note Custom compute machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker compute machine sets are scaling down. This prevents service disruption. 4.3.4. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a compute machine set or editing the node directly: Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. 
For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... Redeploy the nodes associated with that compute machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3 4.4. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition`Unknown`. In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. 
This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust the frequency that the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 4.4.1. Understanding worker latency profiles Worker latency profiles are four different categories of carefully-tuned parameters. The four parameters which implement these values are node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters can use values which allow you to control the reaction of the cluster to latency issues without needing to determine the best values by using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the statuses of Kubelet every 5 seconds. 
The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod is on a node that has the NoExecute taint, the pod runs according to tolerationSeconds . If the node has no taint, it will be evicted in 300 seconds ( default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubelet Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubelet Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubelet Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 4.4.2. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . 
Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. 4.5. Managing control plane machines Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of OpenShift Container Platform that you installed. For more information, see Getting started with control plane machine sets . 4.6. 
Creating infrastructure machine sets for production environments You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets . To create an infrastructure node, you can use a machine set , assign a label to the nodes , or use a machine config pool . For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds . Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 4.6.1. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 4.6.2. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. 
This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors . 4.6.3. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Important Creating a custom machine configuration pool overrides default worker pool configurations if they refer to the same file or unit. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
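Before you create the pool, you can confirm which nodes the nodeSelector will match and review the machine config pools that already exist. This is a minimal sketch that assumes the node-role.kubernetes.io/infra label added earlier in this procedure: USD oc get nodes -l node-role.kubernetes.io/infra USD oc get mcp The first command lists the labeled nodes and the second lists the existing pools, so that you can confirm that no pool named infra exists yet.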
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
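The contents.source field in this example uses a plain data URL ( data:,infra ), which is convenient for short text content. For longer or binary content, a common alternative is a base64-encoded data URL. The following is a sketch only, and /path/to/source-file is a placeholder for a local file: USD base64 -w0 /path/to/source-file Prepend data:text/plain;charset=utf-8;base64, to the command output and use the resulting string as the source value.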
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 4.7. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control. 4.7.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved ... This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. 
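Before you add the NoExecute taint in the next step, you can confirm that the NoSchedule taint is now set on the node. This is a minimal sketch that reuses the describe output shown earlier: USD oc describe node <node_name> | grep Taints The output should include a line similar to Taints: node-role.kubernetes.io/infra=reserved:NoSchedule .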
Add the taint with NoExecute Effect along with the above taint with NoSchedule Effect: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint. The effect will remove any existing pods from the node that do not have a matching toleration. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the value of the key-value pair taint that you added to the node. 4 Specify the effect that you added to the node. 5 Specify the key that you added to the node. 6 Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 7 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. 4.8. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created. 4.8.1. Moving the router You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. 
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.30.3 Because the role list includes infra , the pod is running on the correct node. 4.8.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
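Tip As an alternative to the interactive edit in the next step, you can apply the same node placement to the registry configuration with a single patch. This is a minimal sketch rather than the documented procedure, and it assumes the same infra node selector and reserved taint value used throughout this section: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""},"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},{"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}' Note that a merge patch replaces the entire tolerations list, so include any existing tolerations that you want to keep.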
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 4.8.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved 
effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 4.9. Applying autoscaling to your cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. For more information, see Applying autoscaling to an OpenShift Container Platform cluster . 4.10. Configuring Linux cgroup As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.14 or later will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can change between cgroup v1 and cgroup v2, as needed. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.12 or later. You are logged in to the cluster as a user with administrative privileges. 
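Optional: Before you change the cgroup mode, you can check which cgroup version a node currently uses. The following is a minimal sketch that runs the same file system check used in the verification later in this procedure as a one-off debug command: USD oc debug node/<node_name> -- chroot /host stat -c %T -f /sys/fs/cgroup An output of cgroup2fs indicates cgroup v2, and tmpfs indicates cgroup v1.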
Procedure Enable cgroup v1 on nodes: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.cgroupMode: "v1" : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: "v1" 1 ... 1 Enables cgroup v1. Verification Check the machine configs to see that the new machine configs were added: USD oc get mc Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 New machine configs are created, as expected. Check that the new kernelArguments were added to the new machine configs: USD oc describe mc <name> Example output for cgroup v1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3 1 Disables cgroup v2. 2 Enables cgroup v1 in systemd. 3 Enables the Linux Pressure Stall Information (PSI) feature. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.30.3 After a node returns to the Ready state, start a debug session for that node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check that the sys/fs/cgroup/cgroup2fs file is present on your nodes. 
This file system type is created by cgroup v2; when cgroup v1 is enabled, as in this procedure, the command reports tmpfs instead: USD stat -c %T -f /sys/fs/cgroup Example output for cgroup v1 tmpfs Additional resources Configuring the Linux cgroup version on your nodes 4.11. Enabling Technology Preview features using FeatureGates You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR). 4.11.1. Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. This is an internal feature that most users do not need to interact with. ( ExternalCloudProvider ) Swap memory on nodes. Enables swap memory use for OpenShift Container Platform workloads on a per-node basis. ( NodeSwap ) OpenStack Machine API Provider. This gate has no effect and is planned to be removed from this feature set in a future release. ( MachineAPIProviderOpenStack ) Insights Operator. Enables the InsightsDataGather CRD, which allows users to configure some Insights data gathering options. The feature set also enables the DataGather CRD, which allows users to run Insights data gathering on-demand. ( InsightsConfigAPI ) Dynamic Resource Allocation API. Enables a new API for requesting and sharing resources between pods and containers. This is an internal feature that most users do not need to interact with. ( DynamicResourceAllocation ) Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. ( OpenShiftPodSecurityAdmission ) StatefulSet pod availability upgrading limits. Enables users to define the maximum number of statefulset pods unavailable during updates, which reduces application downtime.
( MaxUnavailableStatefulSet ) gcpLabelsTags vSphereStaticIPs routeExternalCertificate automatedEtcdBackup gcpClusterHostedDNS vSphereControlPlaneMachineset dnsNameResolver machineConfigNodes metricsServer installAlternateInfrastructureAWS mixedCPUsAllocation managedBootImages onClusterBuild signatureStores SigstoreImageVerification DisableKubeletCloudCredentialProviders BareMetalLoadBalancer ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere HardwareSpeed KMSv1 NetworkDiagnosticsConfig VSphereDriverConfiguration ExternalOIDC ChunkSizeMiB ClusterAPIInstallGCP ClusterAPIInstallPowerVS EtcdBackendQuota InsightsConfig InsightsOnDemandDataGather MetricsCollectionProfiles NewOLM NodeDisruptionPolicy PinnedImages PlatformOperators ServiceAccountTokenNodeBinding TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters VolumeGroupSnapshot AdditionalRoutingCapabilities BootcNodeManagement ClusterMonitoringConfig DNSNameResolver ManagedBootImagesAWS NetworkSegmentation OVNObservability PersistentIPsForVirtualization ProcMountType RouteAdvertisements UserNamespacesSupport AWSEFSDriverVolumeMetrics AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CloudDualStackNodeIPs ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP IngressControllerLBSubnetsAWS MultiArchInstallAWS MultiArchInstallGCP NetworkLiveMigration PrivateHostedZoneAWS SetEIPForNLBIngressController ValidatingAdmissionPolicy 4.11.2. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 4.11.3. 
Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 4.12. etcd tasks Back up etcd, enable or disable etcd encryption, or defragment etcd data. Note If you deployed a bare-metal cluster, you can scale the cluster up to 5 nodes as part of your post-installation tasks. For more information, see Node scaling for etcd . 4.12.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. You must have these keys to restore from an etcd backup. Note Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted. If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a state of etcd from the respective etcd snapshot. 4.12.2. Supported encryption types The following encryption types are supported for encrypting etcd data in OpenShift Container Platform: AES-CBC Uses AES-CBC with PKCS#7 padding and a 32 byte key to perform the encryption. The encryption keys are rotated weekly. AES-GCM Uses AES-GCM with a random nonce and a 32 byte key to perform the encryption. The encryption keys are rotated weekly. 4.12.3. Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. 
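Before you enable encryption, you can check which encryption type is currently configured on the APIServer object; an empty value or identity indicates that no encryption is in place. A minimal sketch: USD oc get apiserver cluster -o jsonpath='{.spec.encryption.type}{"\n"}'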
Warning Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted. After you enable etcd encryption, several changes can occur: The etcd encryption might affect the memory consumption of a few resources. You might notice a transient effect on backup performance because the leader must serve the backup. Disk I/O can affect the node that receives the backup state. You can encrypt the etcd database with either AES-GCM or AES-CBC encryption. Note To migrate your etcd database from one encryption type to the other, you can modify the API server's spec.encryption.type field. Migration of the etcd data to the new encryption type occurs automatically. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the spec.encryption.type field to aesgcm or aescbc : spec: encryption: type: aesgcm 1 1 Set to aesgcm for AES-GCM encryption or aescbc for AES-CBC encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of the etcd database. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 4.12.4. Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the spec.encryption.type field to identity : spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. 4.12.5. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Procedure Start a debug session as root for a control plane node: USD oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables by running the following commands: USD export HTTP_PROXY=http://<your_proxy.example.com>:8080 USD export HTTPS_PROXY=https://<your_proxy.example.com>:8080 USD export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command. 
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 4.12.6. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. 
You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 4.12.6.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 4.12.6.2. Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 4.12.7. Restoring to a cluster state You can use a saved etcd backup to restore a cluster state or restore a cluster that has lost the majority of control plane hosts. 
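Before you begin, you can confirm that the backup directory on the recovery host contains both of the files described in the prerequisites that follow. A minimal sketch, assuming the /home/core/assets/backup location used elsewhere in this document: USD ls /home/core/assets/backup/ The listing should include a snapshot_<datetimestamp>.db file and a static_kuberesources_<datetimestamp>.tar.gz file from the same backup.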
Note If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note You do not need to stop the static pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory by running: USD sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped by using: USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" If the output of this command is not empty, wait a few minutes and check again. Move the existing kube-apiserver file out of the kubelet manifest directory by running: USD sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the kube-apiserver containers are stopped by running: USD sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. Move the existing kube-controller-manager file out of the kubelet manifest directory by using: USD sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp Verify that the kube-controller-manager containers are stopped by running: USD sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. 
Move the existing kube-scheduler file out of the kubelet manifest directory by using: USD sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp Verify that the kube-scheduler containers are stopped by using: USD sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. Move the etcd data directory to a different location with the following example: USD sudo mv -v /var/lib/etcd/ /tmp If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps: Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp Verify that any containers managed by the keepalived daemon are stopped: USD sudo crictl ps --name keepalived The output of this command should be empty. If it is not empty, wait a few minutes and check again. Check if the control plane has any Virtual IPs (VIPs) assigned to it: USD ip -o address | egrep '<api_vip>|<ingress_vip>' For each reported VIP, run the following command to remove it: USD sudo ip address del <reported_vip> dev <reported_vip_device> Repeat this step on each of the other control plane hosts that is not the recovery host. Access the recovery control plane host. If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP: USD ip -o address | grep <api_vip> The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml The cluster-restore.sh script must show that etcd , kube-apiserver , kube-controller-manager , and kube-scheduler pods are stopped and then started at the end of the restore process. Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state. 
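You can watch node readiness interactively with the command in the next step; for scripting, a condition-based wait works as well. This is a minimal sketch, assuming you only want to block until every control plane node reports Ready and that a 20 minute timeout is acceptable:
oc wait --for=condition=Ready node -l node-role.kubernetes.io/master --timeout=20m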
Run the following command: USD oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.30.3 host-172-25-75-38 Ready infra,worker 3d20h v1.30.3 host-172-25-75-40 Ready master 3d20h v1.30.3 host-172-25-75-65 Ready master 3d20h v1.30.3 host-172-25-75-74 Ready infra,worker 3d20h v1.30.3 host-172-25-75-79 Ready worker 3d20h v1.30.3 host-172-25-75-86 Ready worker 3d20h v1.30.3 host-172-25-75-98 Ready infra,worker 3d20h v1.30.3 It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. USD ssh -i <ssh-key-path> core@<master-hostname> Sample pki directory sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem Restart the kubelet service on all control plane hosts. From the recovery host, run: USD sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending Certificate Signing Requests (CSRs): Note Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step. Get the list of current CSRs by running: USD oc get csr Example output 1 1 2 A pending kubelet serving CSR, requested by the node for the kubelet serving endpoint. 3 4 A pending kubelet client CSR, requested with the node-bootstrapper node bootstrap credentials. Review the details of a CSR to verify that it is valid by running: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR by running: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet service CSR by running: USD oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running by using: USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running by using: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending , or the output lists more than one running etcd pod, wait a few minutes and check again. If you are using the OVNKubernetes network plugin, you must restart ovnkube-controlplane pods. Delete all of the ovnkube-controlplane pods by running: USD oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane Verify that all of the ovnkube-controlplane pods were redeployed by using: USD oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one. 
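Before moving on, it can be worth confirming that no CSRs from the earlier kubelet restarts are still pending. The following one-liner is a minimal sketch that approves every pending CSR at once; in a production cluster, review each CSR with oc describe csr before approving it:
# List CSRs that have no status yet (that is, pending ones) and approve them in bulk
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve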
Use the following steps to restart OVN-Kubernetes pods on each node: Important Restart OVN-Kubernetes pods in the following order The recovery control plane host The other control plane hosts (if available) The other nodes Note Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail , then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again. Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail . Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run: USD sudo rm -f /var/lib/ovn-ic/etc/*.db Restart the OpenVSwitch services. Access the node by using Secure Shell (SSH) and run the following command: USD sudo systemctl restart ovs-vswitchd ovsdb-server Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting: USD oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node> Check the status of the OVN pods by running the following command: USD oc get po -n openshift-ovn-kubernetes If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting: USD oc delete node <node> Use SSH to log in to the OVN pod node with the Terminating status by running the following command: USD ssh -i <ssh-key-path> core@<node> Move all PEM files from the /var/lib/kubelet/pki directory by running the following command: USD sudo mv /var/lib/kubelet/pki/* /tmp Restart the kubelet service by running the following command: USD sudo systemctl restart kubelet.service Return to the recovery etcd machines by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR: oc adm certificate approve csr-<uuid> Verify that the node is back by running the following command: USD oc get nodes Verify that the ovnkube-node pod is running again with: USD oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node> Note It might take several minutes for the pods to restart. Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host. 
For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the lost control plane hosts. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal . Delete the machine of the lost control plane host by running: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host. A new machine is automatically provisioned after deleting the machine of the lost control plane host. Verify that a new machine has been created by running: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. 
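Rather than polling oc get machines manually, you can block until the replacement machine reaches the Running phase and its node becomes Ready. This is a minimal sketch, assuming a recent oc client that supports --for=jsonpath and using the machine and node names from the previous example output as placeholders:
# Wait for the Machine API to report the new control plane machine as Running
oc wait machine/clustername-8qw5l-master-3 -n openshift-machine-api --for=jsonpath='{.status.phase}'=Running --timeout=30m
# Then wait for the corresponding node to join the cluster and become Ready
oc wait node/ip-10-0-173-171.ec2.internal --for=condition=Ready --timeout=30m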
Turn off the quorum guard by entering: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. In a separate terminal window within the recovery host, export the recovery kubeconfig file by running: USD export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig Force etcd redeployment. In the same terminal window where you exported the recovery kubeconfig file, run: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. The etcd redeployment starts. When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Turn the quorum guard back on by entering: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by running: USD oc get etcd/cluster -oyaml Verify all nodes are updated to the latest revision. In a terminal that has access to the cluster as a cluster-admin user, run: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run: Force a new rollout for kube-apiserver : USD oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. 
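The next two steps repeat the same force-rollout-and-verify pattern for the Kubernetes controller manager and the kube-scheduler. For reference, the pattern can be consolidated as in the following minimal sketch, which assumes cluster-admin access and generates a fresh reason string on each run:
for kind in kubecontrollermanager kubescheduler; do
  oc patch "${kind}" cluster --type=merge -p '{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns)"'"}}'
done
# Poll until every operator reports AllNodesAtLatestRevision for its NodeInstallerProgressing condition
oc get kubeapiserver,kubecontrollermanager,kubescheduler -o jsonpath='{range .items[*]}{.kind}{": "}{.status.conditions[?(@.type=="NodeInstallerProgressing")].reason}{"\n"}{end}'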
Force a new rollout for the Kubernetes controller manager by running the following command: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision by running: USD oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7. If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the kube-scheduler by running: USD oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision by using: USD oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7. If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again. Monitor the platform Operators by running: USD oc adm wait-for-stable-cluster This process can take up to 15 minutes. Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes. Note On completion of the procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted. Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates instead of OAuth tokens. You can authenticate with this file by issuing the following command: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig Issue the following command to display your authenticated user name: USD oc whoami Additional resources Recommended etcd practices Installing a user-provisioned cluster on bare metal Replacing a bare-metal control plane node 4.12.8. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, part of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object.
When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa. The following are some example scenarios that produce an out-of-date status: MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from respective nodes. Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources). 4.13. Pod disruption budgets Understand and configure pod disruption budgets. 4.13.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods that must always be available, even during a disruption. maxUnavailable is the number of pods that can be unavailable during a disruption. Note Available refers to the number of pods that have the condition Ready=True.
Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0, or a minAvailable of 100% or equal to the number of replicas, is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended not to change this value and to update one control plane node at a time. Do not change this value to 3 for the control plane pool. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Note The following example contains some values that are specific to OpenShift Container Platform on AWS. Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 4.13.2. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%. 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example, selector: {}, to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%. 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example, selector: {}, to select all pods in the project. Run the following command to add the object to the project: USD oc create -f </path/to/file> -n <project_name> 4.13.3. Specifying the eviction policy for unhealthy pods When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction.
You can choose one of the following policies: IfHealthyBudget Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted. AlwaysAllow Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget are met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status. Note It is recommended to set the unhealthyPodEvictionPolicy field to AlwaysAllow in the PodDisruptionBudget object to support the eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed. Procedure Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy: Example pod-disruption-budget.yaml file apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1 1 Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty. Create the PodDisruptionBudget object by running the following command: USD oc create -f pod-disruption-budget.yaml With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB, as shown in the short sketch after the additional resources. Additional resources Enabling features using feature gates Unhealthy Pod Eviction Policy in the Kubernetes documentation
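A quick way to see a budget in action is to watch its disruptionsAllowed counter while draining a node. This is a minimal sketch, assuming the my-pdb object created above, its namespace as the current project, and a hypothetical node name:
# Check how many voluntary disruptions the budget currently allows
oc get pdb my-pdb -o jsonpath='{.status.disruptionsAllowed}{"\n"}'
# Drain honors the budget: evictions beyond the allowed count are retried until the PDB permits them
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data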
[ "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" 
generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 
01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 
2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.30.3", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 
metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s", "oc describe mc <name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: 
kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.30.3", "oc debug node/<node_name>", "sh-4.4# chroot /host", "stat -c %T -f /sys/fs/cgroup", "cgroup2fs", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aesgcm 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug --as-root node/<node_name>", "sh-4.4# chroot /host", "export HTTP_PROXY=http://<your_proxy.example.com>:8080", "export HTTPS_PROXY=https://<your_proxy.example.com>:8080", "export NO_PROXY=<example.com>", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 
{\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp", "sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp", "sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"", "sudo mv -v /var/lib/etcd/ /tmp", "sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp", "sudo crictl ps --name keepalived", "ip -o address | egrep '<api_vip>|<ingress_vip>'", "sudo ip address del <reported_vip> dev <reported_vip_device>", "ip -o address | grep <api_vip>", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop 
..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.30.3 host-172-25-75-38 Ready infra,worker 3d20h v1.30.3 host-172-25-75-40 Ready master 3d20h v1.30.3 host-172-25-75-65 Ready master 3d20h v1.30.3 host-172-25-75-74 Ready infra,worker 3d20h v1.30.3 host-172-25-75-79 Ready worker 3d20h v1.30.3 host-172-25-75-86 Ready worker 3d20h v1.30.3 host-172-25-75-98 Ready infra,worker 3d20h v1.30.3", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane", "oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane", "sudo rm -f /var/lib/ovn-ic/etc/*.db", "sudo systemctl restart ovs-vswitchd ovsdb-server", "oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>", "oc get po -n openshift-ovn-kubernetes", "oc delete node <node>", "ssh -i <ssh-key-path> core@<node>", "sudo mv /var/lib/kubelet/pki/* /tmp", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending", "adm certificate approve csr-<uuid>", "oc get nodes", "oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal 
aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc adm wait-for-stable-cluster", "oc -n openshift-etcd get pods -l k8s-app=etcd", 
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1", "oc create -f pod-disruption-budget.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/post-install-cluster-tasks
Chapter 5. Kourier and Istio ingresses
Chapter 5. Kourier and Istio ingresses OpenShift Serverless supports the following two ingress solutions: Kourier Istio using Red Hat OpenShift Service Mesh The default is Kourier. 5.1. Kourier and Istio ingress solutions 5.1.1. Kourier Kourier is the default ingress solution for OpenShift Serverless. It has the following properties: It is based on envoy proxy. It is simple and lightweight. It provides the basic routing functionality that Serverless needs to provide its set of features. It supports basic observability and metrics. It supports basic TLS termination of Knative Service routing. It provides only limited configuration and extension options. 5.1.2. Istio using OpenShift Service Mesh Using Istio as the ingress solution for OpenShift Serverless enables an additional feature set that is based on what Red Hat OpenShift Service Mesh offers: Native mTLS between all connections Serverless components are part of a service mesh Additional observability and metrics Authorization and authentication support Custom rules and configuration, as supported by Red Hat OpenShift Service Mesh However, the additional features come with a higher overhead and resource consumption. For details, see the Red Hat OpenShift Service Mesh documentation. See the "Integrating Service Mesh with OpenShift Serverless" section of Serverless documentation for Istio requirements and installation instructions. 5.1.3. Traffic configuration and routing Regardless of whether you use Kourier or Istio, the traffic for a Knative Service is configured in the knative-serving namespace by the net-kourier-controller or the net-istio-controller respectively. The controller reads the KnativeService and its child custom resources to configure the ingress solution. Both ingress solutions provide an ingress gateway pod that becomes part of the traffic path. Both ingress solutions are based on Envoy. By default, Serverless has two routes for each KnativeService object: A cluster-external route that is forwarded by the OpenShift router, for example myapp-namespace.example.com . A cluster-local route containing the cluster domain, for example myapp.namespace.svc.cluster.local . This domain can and should be used to call Knative services from Knative or other user workloads. The ingress gateway can forward requests either in the serve mode or the proxy mode: In the serve mode, requests go directly to the Queue-Proxy sidecar container of the Knative service. In the proxy mode, requests first go through the Activator component in the knative-serving namespace. The choice of mode depends on the configuration of Knative, the Knative service, and the current traffic. For example, if a Knative Service is scaled to zero, requests are sent to the Activator component, which acts as a buffer until a new Knative service pod is started.
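As a quick sketch of how these two routes look in practice, consider a hypothetical Knative Service named myapp in the namespace namespace, matching the example domains above. The service name, namespace, image reference, and external domain below are placeholders; the actual external domain depends on your cluster's domain configuration.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
  namespace: namespace
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/myapp:latest   # placeholder image

Once the service is ready, it can be reached on either route:

# Cluster-external route, forwarded by the OpenShift router
curl https://myapp-namespace.example.com
# Cluster-local route, preferred for calls from other workloads inside the cluster
curl http://myapp.namespace.svc.cluster.local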
null
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/kourier-and-istio-ingresses
Chapter 2. Cluster Observability Operator overview
Chapter 2. Cluster Observability Operator overview The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system. The COO deploys the following monitoring components: Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. Thanos Querier (optional) - Enables querying of Prometheus instances from a central location. Alertmanager (optional) - Provides alert configuration capabilities for different services. UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting. Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project. 2.1. COO compared to default monitoring stack The COO components function independently of the default in-cluster monitoring stack, which is deployed and managed by the Cluster Monitoring Operator (CMO). Monitoring stacks deployed by the two Operators do not conflict. You can use a COO monitoring stack in addition to the default platform monitoring components deployed by the CMO. The key differences between COO and the default in-cluster monitoring stack are shown in the following table: Feature COO Default monitoring stack Scope and integration Offers comprehensive monitoring and analytics for enterprise-level needs, covering cluster and workload performance. However, it lacks direct integration with OpenShift Container Platform and typically requires an external Grafana instance for dashboards. Limited to core components within the cluster, for example, API server and etcd, and to OpenShift-specific namespaces. There is deep integration into OpenShift Container Platform including console dashboards and alert management in the console. Configuration and customization Broader configuration options including data retention periods, storage methods, and collected data types. The COO can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. Built-in configurations with limited customization options. Data retention and storage Long-term data retention, supporting historical analysis and capacity planning Shorter data retention times, focusing on short-term monitoring and real-time detection. 2.2. Key advantages of using COO Deploying COO helps you address monitoring requirements that are hard to achieve using the default monitoring stack. 2.2.1. Extensibility You can add more metrics to a COO-deployed monitoring stack, which is not possible with core platform monitoring without losing support. You can receive cluster-specific metrics from core platform monitoring through federation. COO supports advanced monitoring scenarios like trend forecasting and anomaly detection. 2.2.2. Multi-tenancy support You can create monitoring stacks per user namespace. You can deploy multiple stacks per namespace or a single stack for multiple namespaces. COO enables independent configuration of alerts and receivers for different teams. 2.2.3. Scalability Supports multiple monitoring stacks on a single cluster. 
Enables monitoring of large clusters through manual sharding. Addresses cases where metrics exceed the capabilities of a single Prometheus instance. 2.2.4. Flexibility Decoupled from OpenShift Container Platform release cycles. Faster release iterations and rapid response to changing requirements. Independent management of alerting rules. 2.3. Target users for COO COO is ideal for users who need high customizability, scalability, and long-term data retention, especially in complex, multi-tenant enterprise environments. 2.3.1. Enterprise-level users and administrators Enterprise users require in-depth monitoring capabilities for OpenShift Container Platform clusters, including advanced performance analysis, long-term data retention, trend forecasting, and historical analysis. These features help enterprises better understand resource usage, prevent performance issues, and optimize resource allocation. 2.3.2. Operations teams in multi-tenant environments With multi-tenancy support, COO allows different teams to configure monitoring views for their projects and applications, making it suitable for teams with flexible monitoring needs. 2.3.3. Development and operations teams COO provides fine-grained monitoring and customizable observability views for in-depth troubleshooting, anomaly detection, and performance tuning during development and operations. 2.4. Using Server-Side Apply to customize Prometheus resources Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites. Compared to Client-Side Apply, it is more declarative, and tracks field management instead of last applied state. Server-Side Apply Declarative configuration management by updating a resource's state without needing to delete and recreate it. Field management Users can specify which fields of a resource they want to update, without affecting the other fields. Managed fields Kubernetes stores metadata about who manages each field of an object in the managedFields field within metadata. Conflicts If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite, relinquish control, or share management. Merge strategy Server-Side Apply merges fields based on the actor who manages them. Procedure Add a MonitoringStack resource using the following configuration: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo A Prometheus resource named sample-monitoring-stack is generated in the coo-demo namespace. 
Retrieve the managed fields of the generated Prometheus resource by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields Example output managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{"type":"Available"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{"type":"Reconciled"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{"shardID":"0"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status Check the metadata.managedFields values, and observe that some fields in metadata and spec are managed by the MonitoringStack resource. Modify a field that is not controlled by the MonitoringStack resource: Change spec.enforcedSampleLimit , which is a field not set by the MonitoringStack resource. Create the file prom-spec-edited.yaml : prom-spec-edited.yaml apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000 Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Note You must use the --server-side flag. 
Get the changed Prometheus object and note that there is one more section in managedFields which has spec.enforcedSampleLimit : USD oc get prometheus -n coo-demo Example output managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply 1 managedFields 2 spec.enforcedSampleLimit Modify a field that is managed by the MonitoringStack resource: Change spec.LogLevel , which is a field managed by the MonitoringStack resource, using the following YAML configuration: # changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1 1 spec.logLevel has been added Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Example output error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts Notice that the field spec.logLevel cannot be changed using Server-Side Apply, because it is already managed by observability-operator . Use the --force-conflicts flag to force the change. USD oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts Example output prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied With --force-conflicts flag, the field can be forced to change, but since the same field is also managed by the MonitoringStack resource, the Observability Operator detects the change, and reverts it back to the value set by the MonitoringStack resource. Note Some Prometheus fields generated by the MonitoringStack resource are influenced by the fields in the MonitoringStack spec stanza, for example, logLevel . These can be changed by changing the MonitoringStack spec . To change the logLevel in the Prometheus object, apply the following YAML to change the MonitoringStack resource: apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info To confirm that the change has taken place, query for the log level by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}' Example output info Note If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden. For example, you are managing a field enforcedSampleLimit which is not generated by the MonitoringStack resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for enforcedSampleLimit , this will overide the value you have previously set. 
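Server-Side Apply also lets you give up ownership of a field you previously applied: re-apply the manifest with the field removed. The following is a minimal sketch, assuming the same apply identity (kubectl) shown in the managedFields output above; the file name is hypothetical.

# prom-spec-relinquish.yaml - the earlier manifest with enforcedSampleLimit removed
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec: {}

oc apply -f ./prom-spec-relinquish.yaml --server-side

Because no other field manager claims enforcedSampleLimit, Server-Side Apply then drops both the ownership entry and the field itself from the Prometheus object.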
The Prometheus object generated by the MonitoringStack resource may contain some fields which are not explicitly set by the monitoring stack. These fields appear because they have default values. Additional resources Kubernetes documentation for Server-Side Apply (SSA)
[ "apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo", "oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields", "managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{\"uid\":\"81da0d9a-61aa-4df3-affc-71015bcbde5a\"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{\"type\":\"Available\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{\"type\":\"Reconciled\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{\"shardID\":\"0\"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status", "apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000", "oc apply -f ./prom-spec-edited.yaml --server-side", "oc get prometheus -n coo-demo", "managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply", "changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1", "oc apply -f ./prom-spec-edited.yaml --server-side", "error: Apply failed with 1 conflict: conflict with \"observability-operator\": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). 
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts", "oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts", "prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied", "apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info", "oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'", "info" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cluster_observability_operator/cluster-observability-operator-overview
Chapter 14. Provisioning Cloud Instances in Amazon EC2
Chapter 14. Provisioning Cloud Instances in Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides public cloud compute resources. Using Satellite, you can interact with Amazon EC2's public API to create cloud instances and control their power management states. Use the procedures in this chapter to add a connection to an Amazon EC2 account and provision a cloud instance. 14.1. Prerequisites for Amazon EC2 Provisioning The requirements for Amazon EC2 provisioning include: A Capsule Server managing a network in your EC2 environment. Use a Virtual Private Cloud (VPC) to ensure a secure network between the hosts and Capsule Server. An Amazon Machine Image (AMI) for image-based provisioning. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management guide. 14.2. Installing Amazon EC2 Plugin Install the Amazon EC2 plugin to attach an EC2 compute resource provider to Satellite. This allows you to manage and deploy hosts to EC2. Procedure Install the EC2 compute resource provider on your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the compute resources tab to verify the installation of the Amazon EC2 plugin. 14.3. Adding an Amazon EC2 Connection to the Satellite Server Use this procedure to add the Amazon EC2 connection in Satellite Server's compute resources. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisite An AWS EC2 user performing this procedure needs the AmazonEC2FullAccess permissions. You can attach these permissions from AWS. Time Settings and Amazon Web Services Amazon Web Services uses time settings as part of the authentication process. Ensure that Satellite Server's time is correctly synchronized. Ensure that an NTP service, such as ntpd or chronyd , is running properly on Satellite Server. Failure to provide the correct time to Amazon Web Services can lead to authentication failures. For more information about synchronizing time in Satellite, see Synchronizing Time in Installing Satellite Server in a Connected Network Environment . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and in the Compute Resources window, click Create Compute Resource . In the Name field, enter a name to identify the Amazon EC2 compute resource. From the Provider list, select EC2 . In the Description field, enter information that helps distinguish the resource for future use. Optional: From the HTTP proxy list, select an HTTP proxy to connect to external API services. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 14.4, "Using an HTTP Proxy with Compute Resources" . In the Access Key and Secret Key fields, enter the access keys for your Amazon EC2 account. For more information, see Managing Access Keys for your AWS Account on the Amazon documentation website. Optional: Click the Load Regions button to populate the Regions list. From the Region list, select the Amazon EC2 region or data center to use. Click the Locations tab and ensure that the location you want to use is selected, or add a different location. Click the Organizations tab and ensure that the organization you want to use is selected, or add a different organization. 
Click Submit to save the Amazon EC2 connection. Select the new compute resource and then click the SSH keys tab, and click Download to save a copy of the SSH keys to use for SSH authentication. Until BZ1793138 is resolved, you can download a copy of the SSH keys only immediately after creating the Amazon EC2 compute resource. If you require SSH keys at a later stage, follow the procedure in Section 14.9, "Connecting to an Amazon EC2 instance using SSH" . CLI procedure Create the connection with the hammer compute-resource create command. Use --user and --password options to add the access key and secret key respectively. 14.4. Using an HTTP Proxy with Compute Resources In some cases, the EC2 compute resource that you use might require a specific HTTP proxy to communicate with Satellite. In Satellite, you can create an HTTP proxy and then assign the HTTP proxy to your EC2 compute resource. However, if you configure an HTTP proxy for Satellite in Administer > Settings , and then add another HTTP proxy for your compute resource, the HTTP proxy that you define in Administer > Settings takes precedence. Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies , and select New HTTP Proxy . In the Name field, enter a name for the HTTP proxy. In the URL field, enter the URL for the HTTP proxy, including the port number. Optional: Enter a username and password to authenticate to the HTTP proxy, if your HTTP proxy requires authentication. Click Test Connection to ensure that you can connect to the HTTP proxy from Satellite. Click the Locations tab and add a location. Click the Organization tab and add an organization. Click Submit . 14.5. Creating an Image for Amazon EC2 You can create images for Amazon EC2 from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Amazon EC2 provider. Click Create Image . In the Name field, enter a meaningful and unique name for your EC2 image. From the Operating System list, select an operating system to associate with the image. From the Architecture list, select an architecture to associate with the image. In the Username field, enter the username needed to SSH into the machine. In the Image ID field, enter the image ID provided by Amazon or an operating system vendor. Optional: Select the User Data check box to enable support for user data input. Optional: Set an Iam Role for Fog to use when creating this image. Click Submit to save your changes to Satellite. 14.6. Adding Amazon EC2 Images to Satellite Server Amazon EC2 uses image-based provisioning to create hosts. You must add image details to your Satellite Server. This includes access details and image location. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and select an Amazon EC2 connection. Click the Images tab, and then click Create Image . In the Name field, enter a name to identify the image for future use. From the Operating System list, select the operating system that corresponds with the image you want to add. From the Architecture list, select the operating system's architecture. In the Username field, enter the SSH user name for image access. This is normally the root user. In the Password field, enter the SSH password for image access. In the Image ID field, enter the Amazon Machine Image (AMI) ID for the image. This is usually in the following format: ami-xxxxxxxx . 
Optional: Select the User Data checkbox if the images support user data input, such as cloud-init data. If you enable user data, the Finish scripts are automatically disabled. This also applies in reverse: if you enable the Finish scripts, this disables user data. Optional: In the IAM role field, enter the Amazon security role used for creating the image. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the Amazon EC2 server. 14.7. Adding Amazon EC2 Details to a Compute Profile You can add hardware settings for instances on Amazon EC2 to a compute profile. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles and click the name of your profile, then click an EC2 connection. From the Flavor list, select the hardware profile on EC2 to use for the host. From the Image list, select the image to use for image-based provisioning. From the Availability zone list, select the target cluster to use within the chosen EC2 region. From the Subnet list, add the subnet for the EC2 instance. If you have a VPC for provisioning new hosts, use its subnet. From the Security Groups list, select the cloud-based access rules for ports and IP addresses to apply to the host. From the Managed IP list, select either a Public IP or a Private IP. Click Submit to save the compute profile. CLI procedure The compute profile CLI commands are not yet implemented in Red Hat Satellite. As an alternative, you can include the same settings directly during the host creation process. 14.8. Creating Image-Based Hosts on Amazon EC2 The Amazon EC2 provisioning process creates hosts from existing images on the Amazon EC2 server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. From the Host Group list, you can select a host group to populate most of the new host's fields. From the Deploy on list, select the EC2 connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings. Click the Interface tab, and then click Edit on the host's interface, and verify that the fields are populated with values. Leave the Mac Address field blank. Satellite Server automatically selects and IP address and the Managed , Primary , and Provision options for the first interface on the host. Click the Operating System tab and confirm that all fields are populated with values. Click the Virtual Machine tab and confirm that all fields are populated with values. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save your changes. This new host entry triggers the Amazon EC2 server to create the instance, using the pre-existing image as a basis for the new volume. CLI procedure Create the host with the hammer host create command and include --provision-method image to use image-based provisioning. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 14.9. Connecting to an Amazon EC2 instance using SSH You can connect remotely to an Amazon EC2 instance from Satellite Server using SSH. 
However, to connect to any Amazon Web Services EC2 instance that you provision through Red Hat Satellite, you must first access the private key that is associated with the compute resource in the Foreman database, and use this key for authentication. Procedure To locate the compute resource list, on your Satellite Server base system, enter the following command, and note the ID of the compute resource that you want to use: Switch user to the postgres user: Initiate the postgres shell: Connect to the Foreman database as the user postgres : Select the secret from key_pairs where compute_resource_id = 3 : Copy the key from after -----BEGIN RSA PRIVATE KEY----- until -----END RSA PRIVATE KEY----- . Create a .pem file and paste your key into the file: Ensure that you restrict access to the .pem file: To connect to the Amazon EC2 instance, enter the following command: 14.10. Configuring a Finish Template for an Amazon Web Service EC2 Environment You can use Red Hat Satellite finish templates during the provisioning of Red Hat Enterprise Linux instances in an Amazon EC2 environment. If you want to use a Finish template with SSH, Satellite must reside within the EC2 environment and in the correct security group. Satellite currently performs SSH finish provisioning directly, not using Capsule Server. If Satellite Server does not reside within EC2, the EC2 virtual machine reports an internal IP rather than the necessary external IP with which it can be reached. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates . In the Provisioning Templates page, enter Kickstart default finish into the search field and click Search . On the Kickstart default finish template, select Clone . In the Name field, enter a unique name for the template. In the template, prefix each command that requires root privileges with sudo , except for subscription-manager register and yum commands, or add the following line to run the entire template as the sudo user: Click the Association tab, and associate the template with a Red Hat Enterprise Linux operating system that you want to use. Click the Locations tab, and add the the location where the host resides. Click the Organizations tab, and add the organization that the host belongs to. Make any additional customizations or changes that you require, then click Submit to save your template. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system that you want for your host. Click the Templates tab, and from the Finish Template list, select your finish template. In the Satellite web UI, navigate to Hosts > Create Host and enter the information about the host that you want to create. Click the Parameters tab and navigate to Host parameters . In Host parameters , click the Add Parameter button three times to add three new parameter fields. Add the following three parameters: In the Name field, enter remote_execution_ssh_keys . In the corresponding Value field, enter the output of cat /usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy.pub . In the Name field, enter remote_execution_ssh_user . In the corresponding Value field, enter ec2-user . In the Name field, enter activation_keys . In the corresponding Value field, enter your activation key. Click Submit to save the changes. 14.11. Deleting a Virtual Machine on Amazon EC2 You can delete virtual machines running on Amazon EC2 from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Amazon EC2 provider. 
On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Amazon EC2 compute resource while retaining any associated hosts within Satellite. If you want to delete an orphaned host, navigate to Hosts > All Hosts and delete the host manually. 14.12. Uninstalling Amazon EC2 Plugin If you have previously installed the Amazon EC2 plugin but do not use it anymore to manage and deploy hosts to EC2, you can uninstall it from your Satellite Server. Procedure Uninstall the EC2 compute resource provider from your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the Available Providers tab to verify the removal of the Amazon EC2 plugin. 14.13. More Information About Amazon Web Services and Satellite For information about how to locate Red Hat Gold Images on Amazon Web Services EC2, see How to Locate Red Hat Cloud Access Gold Images on AWS EC2 . For information about how to install and use the Amazon Web Service Client on Linux, see Install the AWS Command Line Interface on Linux in the Amazon Web Services documentation. For information about importing and exporting virtual machines in Amazon Web Services, see VM Import/Export in the Amazon Web Services documentation.
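If you use the AWS CLI mentioned above, one way to find an AMI ID to enter in the Image ID field from Section 14.6 is to query EC2 directly. This is a sketch; the region, owner account ID, and name filter are placeholders that you must replace with values for your environment.

aws ec2 describe-images \
  --region <region> \
  --owners <image_owner_account_id> \
  --filters "Name=name,Values=RHEL-9*" \
  --query 'Images[].[ImageId,Name]' \
  --output table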
[ "satellite-installer --enable-foreman-compute-ec2", "hammer compute-resource create --description \"Amazon EC2 Public Cloud` --locations \" My_Location \" --name \" My_EC2_Compute_Resource \" --organizations \" My_Organization \" --password \" My_Secret_Key \" --provider \"EC2\" --region \" My_Region \" --user \" My_User_Name \"", "hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_EC2_Compute_Resource \" --name \" My_Amazon_EC2_Image \" --operatingsystem \" My_Operating_System \" --user-data true --username root --uuid \"ami- My_AMI_ID \"", "hammer host create --compute-attributes=\"flavor_id=m1.small,image_id=TestImage,availability_zones=us-east-1a,security_group_ids=Default,managed_ip=Public\" --compute-resource \" My_EC2_Compute_Resource \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_Amazon_EC2_Image \" --interface \"managed=true,primary=true,provision=true,subnet_id=EC2\" --location \" My_Location \" --managed true --name \"My_Amazon_EC2_Host_\" --organization \" My_Organization \" --provision-method image", "hammer compute-resource list", "su - postgres", "psql", "postgres=# \\c foreman", "select secret from key_pairs where compute_resource_id = 3; secret", "vim Keyname .pem", "chmod 600 Keyname .pem", "ssh -i Keyname .pem ec2-user@ example.aws.com", "sudo -s << EOS _Template_ _Body_ EOS", "yum remove -y foreman-ec2 satellite-installer --no-enable-foreman-compute-ec2" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/provisioning_cloud_instances_in_amazon_ec2_provisioning
10.4. Installing virt-who Manually
10.4. Installing virt-who Manually This section describes how to manually attach the subscription provided by the hypervisor. Procedure 10.2. How to attach a subscription manually List subscription information and find the Pool ID First, list the available subscriptions that are of the virtual type. Enter the following command: Note the Pool ID displayed. Copy this ID because you will need it in the next step. Attach the subscription with the Pool ID Using the Pool ID you copied in the previous step, run the attach command. Replace the example Pool ID XYZ123 with the Pool ID you retrieved. Enter the following command:
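Optionally, to confirm that the subscription is now attached, you can list the consumed subscriptions. This verification step is an addition to the procedure above and uses a standard subscription-manager command:

subscription-manager list --consumed

The Red Hat Enterprise Linux ES (Basic for Virtualization) subscription should appear in the output.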
[ "subscription-manager list --avail --match-installed | grep 'Virtual' -B12 Subscription Name: Red Hat Enterprise Linux ES (Basic for Virtualization) Provides: Red Hat Beta Oracle Java (for RHEL Server) Red Hat Enterprise Linux Server SKU: ------- Pool ID: XYZ123 Available: 40 Suggested: 1 Service Level: Basic Service Type: L1-L3 Multi-Entitlement: No Ends: 01/02/2017 System Type: Virtual", "subscription-manager attach --pool=XYZ123 Successfully attached a subscription for: Red Hat Enterprise Linux ES (Basic for Virtualization)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/manual-install-virt-who
probe::ioscheduler.elv_add_request.tp
probe::ioscheduler.elv_add_request.tp Name probe::ioscheduler.elv_add_request.tp - tracepoint-based probe to indicate a request is added to the request queue. Synopsis Values disk_major Disk major number of request. rq Address of request. q Pointer to request queue. name Name of the probe point. elevator_name The type of I/O elevator currently enabled. disk_minor Disk minor number of request. rq_flags Request flags.
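For example, the following SystemTap one-liner prints these values each time a request is added to the request queue. This is a minimal sketch that assumes systemtap (and, depending on the probe, the matching kernel debuginfo) is installed.

stap -e 'probe ioscheduler.elv_add_request.tp {
  printf("%s: dev %d:%d elevator=%s rq=%p flags=0x%x\n",
         name, disk_major, disk_minor, elevator_name, rq, rq_flags)
}'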
[ "ioscheduler.elv_add_request.tp" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioscheduler-elv-add-request-tp
Chapter 10. SSO protocols
Chapter 10. SSO protocols This section discusses authentication protocols, the Red Hat Single Sign-On authentication server and how applications, secured by the Red Hat Single Sign-On authentication server, interact with these protocols. 10.1. OpenID Connect OpenID Connect (OIDC) is an authentication protocol that is an extension of OAuth 2.0 . OAuth 2.0 is a framework for building authorization protocols and is incomplete. OIDC, however, is a full authentication and authorization protocol that uses the Json Web Token (JWT) standards. The JWT standards define an identity token JSON format and methods to digitally sign and encrypt data in a compact and web-friendly way. In general, OIDC implements two use cases. The first case is an application requesting that a Red Hat Single Sign-On server authenticates a user. Upon successful login, the application receives an identity token and an access token . The identity token contains user information including user name, email, and profile information. The realm digitally signs the access token which contains access information (such as user role mappings) that applications use to determine the resources users can access in the application. The second use case is a client accessing remote services. The client requests an access token from Red Hat Single Sign-On to invoke on remote services on behalf of the user. Red Hat Single Sign-On authenticates the user and asks the user for consent to grant access to the requesting client. The client receives the access token which is digitally signed by the realm. The client makes REST requests on remote services using the access token . The remote REST service extracts the access token . The remote REST service verifies the tokens signature. The remote REST service decides, based on access information within the token, to process or reject the request. 10.1.1. OIDC auth flows OIDC has several methods, or flows, that clients or applications can use to authenticate users and receive identity and access tokens. The method depends on the type of application or client requesting access. 10.1.1.1. Authorization Code Flow The Authorization Code Flow is a browser-based protocol and suits authenticating and authorizing browser-based applications. It uses browser redirects to obtain identity and access tokens. A user connects to an application using a browser. The application detects the user is not logged into the application. The application redirects the browser to Red Hat Single Sign-On for authentication. The application passes a callback URL as a query parameter in the browser redirect. Red Hat Single Sign-On uses the parameter upon successful authentication. Red Hat Single Sign-On authenticates the user and creates a one-time, short lived, temporary code. Red Hat Single Sign-On redirects to the application using the callback URL and adds the temporary code as a query parameter in the callback URL. The application extracts the temporary code and makes a background REST invocation to Red Hat Single Sign-On to exchange the code for an identity and access and refresh token. To prevent replay attacks, the temporary code cannot be used more than once. Note A system is vulnerable to a stolen token for the lifetime of that token. For security and scalability reasons, access tokens are generally set to expire quickly so subsequent token requests fail. If a token expires, an application can obtain a new access token using the additional refresh token sent by the login protocol. 
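A rough sketch of the two HTTP exchanges involved follows, using the endpoint paths listed at the end of this chapter; the host name (and any context path), realm, client id, secret, and redirect URI are placeholders.

# 1. The browser is redirected to the authorization endpoint and the user logs in
https://sso.example.com/realms/myrealm/protocol/openid-connect/auth?client_id=myclient&response_type=code&scope=openid&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback

# 2. The application exchanges the temporary code for tokens (shown here for a confidential client)
curl -X POST https://sso.example.com/realms/myrealm/protocol/openid-connect/token \
  -d "grant_type=authorization_code" \
  -d "code=<temporary_code>" \
  -d "redirect_uri=https://app.example.com/callback" \
  -d "client_id=myclient" \
  -d "client_secret=<client_secret>"

The JSON response contains the access token, refresh token, and identity token described above.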
Confidential clients provide client secrets when they exchange the temporary codes for tokens. Public clients are not required to provide client secrets. Public clients are secure when HTTPS is strictly enforced and redirect URIs registered for the client are strictly controlled. HTML5/JavaScript clients have to be public clients because there is no way to securely transmit the client secret to HTML5/JavaScript clients. For more details, see the Managing Clients chapter. Red Hat Single Sign-On also supports the Proof Key for Code Exchange specification. 10.1.1.2. Implicit Flow The Implicit Flow is a browser-based protocol. It is similar to the Authorization Code Flow but with fewer requests and no refresh tokens. Note The possibility exists of access tokens leaking in the browser history when tokens are transmitted via redirect URIs (see below). Also, this flow does not provide clients with refresh tokens. Therefore, access tokens have to be long-lived or users have to re-authenticate when they expire. We do not advise using this flow. This flow is supported because it is in the OIDC and OAuth 2.0 specification. The protocol works as follows: A user connects to an application using a browser. The application detects the user is not logged into the application. The application redirects the browser to Red Hat Single Sign-On for authentication. The application passes a callback URL as a query parameter in the browser redirect. Red Hat Single Sign-On uses the query parameter upon successful authentication. Red Hat Single Sign-On authenticates the user and creates an identity and access token. Red Hat Single Sign-On redirects to the application using the callback URL and additionally adds the identity and access tokens as a query parameter in the callback URL. The application extracts the identity and access tokens from the callback URL. 10.1.1.3. Resource owner password credentials grant (Direct Access Grants) Direct Access Grants are used by REST clients to obtain tokens on behalf of users. It is a HTTP POST request that contains: The credentials of the user. The credentials are sent within form parameters. The id of the client. The clients secret (if it is a confidential client). The HTTP response contains the identity , access , and refresh tokens. 10.1.1.4. Client credentials grant The Client Credentials Grant creates a token based on the metadata and permissions of a service account associated with the client instead of obtaining a token that works on behalf of an external user. Client Credentials Grants are used by REST clients. See the Service Accounts chapter for more information. 10.1.1.5. Device authorization grant This is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. Here's a brief summary of the protocol: The application requests Red Hat Single Sign-On a device code and a user code. Red Hat Single Sign-On creates a device code and a user code. Red Hat Single Sign-On returns a response including the device code and the user code to the application. The application provides the user with the user code and the verification URI. The user accesses a verification URI to be authenticated by using another browser. The application repeatedly polls Red Hat Single Sign-On to find out if the user completed the user authorization. If user authentication is complete, the application exchanges the device code for an identity , access and refresh token. 10.1.1.6. 
Client initiated backchannel authentication grant This feature is used by clients who want to initiate the authentication flow by communicating with the OpenID Provider directly without redirect through the user's browser like OAuth 2.0's authorization code grant. Here's a brief summary of the protocol: The client requests Red Hat Single Sign-On an auth_req_id that identifies the authentication request made by the client. Red Hat Single Sign-On creates the auth_req_id. After receiving this auth_req_id, this client repeatedly needs to poll Red Hat Single Sign-On to obtain an Access Token, Refresh Token and ID Token from Red Hat Single Sign-On in return for the auth_req_id until the user is authenticated. An administrator can configure Client Initiated Backchannel Authentication (CIBA) related operations as CIBA Policy per realm. Also please refer to other places of Red Hat Single Sign-On documentation like Backchannel Authentication Endpoint section of Securing Applications and Services Guide and Client Initiated Backchannel Authentication Grant section of Securing Applications and Services Guide. 10.1.1.6.1. CIBA Policy An administrator carries out the following operations on the Admin Console : Open the Authentication CIBA Policy tab. Configure items and click Save . The configurable items and their description follow. Configuration Description Backchannel Token Delivery Mode Specifying how the CD (Consumption Device) gets the authentication result and related tokens. There are three modes, "poll", "ping" and "push". Red Hat Single Sign-On only supports "poll". The default setting is "poll". This configuration is required. For more details, see CIBA Specification . Expires In The expiration time of the "auth_req_id" in seconds since the authentication request was received. The default setting is 120. This configuration is required. For more details, see CIBA Specification . Interval The interval in seconds the CD (Consumption Device) needs to wait for between polling requests to the token endpoint. The default setting is 5. This configuration is optional. For more details, see CIBA Specification . Authentication Requested User Hint The way of identifying the end-user for whom authentication is being requested. The default setting is "login_hint". There are three modes, "login_hint", "login_hint_token" and "id_token_hint". Red Hat Single Sign-On only supports "login_hint". This configuration is required. For more details, see CIBA Specification . 10.1.1.6.2. Provider Setting The CIBA grant uses the following two providers. Authentication Channel Provider : provides the communication between Red Hat Single Sign-On and the entity that actually authenticates the user via AD (Authentication Device). User Resolver Provider : get UserModel of Red Hat Single Sign-On from the information provided by the client to identify the user. Red Hat Single Sign-On has both default providers. However, the administrator needs to set up Authentication Channel Provider like this: <spi name="ciba-auth-channel"> <default-provider>ciba-http-auth-channel</default-provider> <provider name="ciba-http-auth-channel" enabled="true"> <properties> <property name="httpAuthenticationChannelUri" value="https://backend.internal.example.com/auth"/> </properties> </provider> </spi> The configurable items and their description follow. Configuration Description httpAuthenticationChannelUri Specifying URI of the entity that actually authenticates the user via AD (Authentication Device). 10.1.1.6.3. 
Authentication Channel Provider CIBA standard document does not specify how to authenticate the user by AD. Therefore, it might be implemented at the discretion of products. Red Hat Single Sign-On delegates this authentication to an external authentication entity. To communicate with the authentication entity, Red Hat Single Sign-On provides Authentication Channel Provider. Its implementation of Red Hat Single Sign-On assumes that the authentication entity is under the control of the administrator of Red Hat Single Sign-On so that Red Hat Single Sign-On trusts the authentication entity. It is not recommended to use the authentication entity that the administrator of Red Hat Single Sign-On cannot control. Authentication Channel Provider is provided as SPI provider so that users of Red Hat Single Sign-On can implement their own provider in order to meet their environment. Red Hat Single Sign-On provides its default provider called HTTP Authentication Channel Provider that uses HTTP to communicate with the authentication entity. If a user of Red Hat Single Sign-On user want to use the HTTP Authentication Channel Provider, they need to know its contract between Red Hat Single Sign-On and the authentication entity consisting of the following two parts. Authentication Delegation Request/Response Red Hat Single Sign-On sends an authentication request to the authentication entity. Authentication Result Notification/ACK The authentication entity notifies the result of the authentication to Red Hat Single Sign-On. Authentication Delegation Request/Response consists of the following messaging. Authentication Delegation Request The request is sent from Red Hat Single Sign-On to the authentication entity to ask it for user authentication by AD. Headers Name Value Description Content-Type application/json The message body is json formatted. Authorization Bearer [token] The [token] is used when the authentication entity notifies the result of the authentication to Red Hat Single Sign-On. Parameters Type Name Description Path delegation_reception The endpoint provided by the authentication entity to receive the delegation request Body Name Description login_hint It tells the authentication entity who is authenticated by AD. By default, it is the user's "username". This field is required and was defined by CIBA standard document. scope It tells which scopes the authentication entity gets consent from the authenticated user. This field is required and was defined by CIBA standard document. is_consent_required It shows whether the authentication entity needs to get consent from the authenticated user about the scope. This field is required. binding_message Its value is intended to be shown in both CD and AD's UI to make the user recognize that the authentication by AD is triggered by CD. This field is optional and was defined by CIBA standard document. acr_values It tells the requesting Authentication Context Class Reference from CD. This field is optional and was defined by CIBA standard document. Authentication Delegation Response The response is returned from the authentication entity to Red Hat Single Sign-On to notify that the authentication entity received the authentication request from Red Hat Single Sign-On. Responses HTTP Status Code Description 201 It notifies Red Hat Single Sign-On of receiving the authentication delegation request. Authentication Result Notification/ACK consists of the following messaging. 
Authentication Result Notification The authentication entity sends the result of the authentication request to Red Hat Single Sign-On. Headers Name Value Description Content-Type application/json The message body is json formatted. Authorization Bearer [token] The [token] must be the one the authentication entity has received from Red Hat Single Sign-On in Authentication Delegation Request. Parameters Type Name Description Path realm The realm name Body Name Description status It tells the result of user authentication by AD. It must be one of the following status. SUCCEED : The authentication by AD has been successfully completed. UNAUTHORIZED : The authentication by AD has not been completed. CANCELLED : The authentication by AD has been cancelled by the user. Authentication Result ACK The response is returned from Red Hat Single Sign-On to the authentication entity to notify Red Hat Single Sign-On received the result of user authentication by AD from the authentication entity. Responses HTTP Status Code Description 200 It notifies the authentication entity of receiving the notification of the authentication result. 10.1.1.6.4. User Resolver Provider Even if the same user, its representation may differ in each CD, Red Hat Single Sign-On and the authentication entity. For CD, Red Hat Single Sign-On and the authentication entity to recognize the same user, this User Resolver Provider converts their own user representations among them. User Resolver Provider is provided as SPI provider so that users of Red Hat Single Sign-On can implement their own provider in order to meet their environment. Red Hat Single Sign-On provides its default provider called Default User Resolver Provider that has the following characteristics. Only support login_hint parameter and is used as default. username of UserModel in Red Hat Single Sign-On is used to represent the user on CD, Red Hat Single Sign-On and the authentication entity. 10.1.2. OIDC Logout OIDC has four specifications relevant to logout mechanisms. These specifications are in draft status: Session Management RP-Initiated Logout Front-Channel Logout Back-Channel Logout Again since all of this is described in the OIDC specification we will only give a brief overview here. 10.1.2.1. Session Management This is a browser-based logout. The application obtains session status information from Red Hat Single Sign-On at a regular basis. When the session is terminated at Red Hat Single Sign-On the application will notice and trigger it's own logout. 10.1.2.2. RP-Initiated Logout This is also a browser-based logout where the logout starts by redirecting the user to a specific endpoint at Red Hat Single Sign-On. This redirect usually happens when the user clicks the Log Out link on the page of some application, which previously used Red Hat Single Sign-On to authenticate the user. Once the user is redirected to the logout endpoint, Red Hat Single Sign-On is going to send logout requests to clients to let them to invalidate their local user sessions, and potentially redirect the user to some URL once the logout process is finished. The user might be optionally requested to confirm the logout in case the id_token_hint parameter was not used. After logout, the user is automatically redirected to the specified post_logout_redirect_uri as long as it is provided as a parameter. Note that you need to include either the client_id or id_token_hint parameter in case the post_logout_redirect_uri is included. 
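To make the logout parameters above concrete, the following is a sketch of an RP-initiated logout redirect. The hostname, realm name, client ID, and token value are placeholders rather than values defined in this guide; the endpoint path is the logout endpoint listed later in this chapter.
https://sso.example.com/auth/realms/myrealm/protocol/openid-connect/logout?post_logout_redirect_uri=https%3A%2F%2Fapp.example.com%2Flogged-out&id_token_hint=<ID token previously issued to the client>
If the client does not have the ID token at hand, it can send client_id=myclient instead of id_token_hint; in that case, as noted above, Red Hat Single Sign-On might ask the user to confirm the logout.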
Also the post_logout_redirect_uri parameter needs to match one of the Valid Post Logout Redirect URIs specified in the client configuration. Depending on the client configuration, logout requests can be sent to clients through the front-channel or through the back-channel. For the frontend browser clients, which rely on the Session Management described in the section, Red Hat Single Sign-On does not need to send any logout requests to them; these clients automatically detect that SSO session in the browser is logged out. 10.1.2.3. Frontchannel Logout To configure clients to receive logout requests through the front-channel, look at the Front-Channel Logout client setting. When using this method, consider the following: Logout requests sent by Red Hat Single Sign-On to clients rely on the browser and on embedded iframes that are rendered for the logout page. By being based on iframes , front-channel logout might be impacted by Content Security Policies (CSP) and logout requests might be blocked. If the user closes the browser prior to rendering the logout page or before logout requests are actually sent to clients, their sessions at the client might not be invalidated. Note Consider using Back-Channel Logout as it provides a more reliable and secure approach to log out users and terminate their sessions on the clients. If the client is not enabled with front-channel logout, then Red Hat Single Sign-On is going to try first to send logout requests through the back-channel using the Back-Channel Logout URL . If not defined, the server is going to fall back to using the Admin URL . 10.1.2.4. Backchannel Logout This is a non browser-based logout that uses direct backchannel communication between Red Hat Single Sign-On and clients. Red Hat Single Sign-On sends a HTTP POST request containing a logout token to all clients logged into Red Hat Single Sign-On. These requests are sent to a registered backchannel logout URLs at Red Hat Single Sign-On and are supposed to trigger a logout at client side. 10.1.3. Red Hat Single Sign-On server OIDC URI endpoints The following is a list of OIDC endpoints that Red Hat Single Sign-On publishes. These endpoints can be used when a non-Red Hat Single Sign-On client adapter uses OIDC to communicate with the authentication server. They are all relative URLs. The root of the URL consists of the HTTP(S) protocol, hostname, and optionally the path: For example /realms/{realm-name}/protocol/openid-connect/auth Used for obtaining a temporary code in the Authorization Code Flow or obtaining tokens using the Implicit Flow, Direct Grants, or Client Grants. /realms/{realm-name}/protocol/openid-connect/token Used by the Authorization Code Flow to convert a temporary code into a token. /realms/{realm-name}/protocol/openid-connect/logout Used for performing logouts. /realms/{realm-name}/protocol/openid-connect/userinfo Used for the User Info service described in the OIDC specification. /realms/{realm-name}/protocol/openid-connect/revoke Used for OAuth 2.0 Token Revocation described in RFC7009 . /realms/{realm-name}/protocol/openid-connect/certs Used for the JSON Web Key Set (JWKS) containing the public keys used to verify any JSON Web Token (jwks_uri) /realms/{realm-name}/protocol/openid-connect/auth/device Used for Device Authorization Grant to obtain a device code and a user code. 
/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth This is the URL endpoint for Client Initiated Backchannel Authentication Grant to obtain an auth_req_id that identifies the authentication request made by the client. /realms/{realm-name}/protocol/openid-connect/logout/backchannel-logout This is the URL endpoint for performing backchannel logouts described in the OIDC specification. In all of these, replace {realm-name} with the name of the realm. 10.2. SAML SAML 2.0 is a similar specification to OIDC but more mature. It is descended from SOAP and web service messaging specifications, so it is generally more verbose than OIDC. SAML 2.0 is an authentication protocol that exchanges XML documents between authentication servers and applications. XML signatures and encryption are used to verify requests and responses. In general, SAML implements two use cases. The first use case is an application that requests that the Red Hat Single Sign-On server authenticate a user. Upon successful login, the application will receive an XML document. This document contains a SAML assertion that specifies user attributes. The realm digitally signs the document, which contains access information (such as user role mappings) that applications use to determine the resources users are allowed to access in the application. The second use case is a client accessing remote services. The client requests a SAML assertion from Red Hat Single Sign-On to invoke remote services on behalf of the user. 10.2.1. SAML bindings Red Hat Single Sign-On supports three binding types. 10.2.1.1. Redirect binding Redirect binding uses a series of browser redirect URIs to exchange information. A user connects to an application using a browser. The application detects the user is not authenticated. The application generates an XML authentication request document and encodes it as a query parameter in a URI. The URI is used to redirect to the Red Hat Single Sign-On server. Depending on your settings, the application can also digitally sign the XML document and include the signature as a query parameter in the redirect URI to Red Hat Single Sign-On. This signature is used to validate the client that sends the request. The browser redirects to Red Hat Single Sign-On. The server extracts the XML auth request document and verifies the digital signature, if required. The user enters their authentication credentials. After authentication, the server generates an XML authentication response document. The document contains a SAML assertion that holds metadata about the user, including name, address, email, and any role mappings the user has. The document is usually digitally signed using XML signatures, and may also be encrypted. The XML authentication response document is encoded as a query parameter in a redirect URI. The URI brings the browser back to the application. The digital signature is also included as a query parameter. The application receives the redirect URI and extracts the XML document. The application verifies the realm's signature to ensure it is receiving a valid authentication response. The information inside the SAML assertion is used to make access decisions or display user data. 10.2.1.2. POST binding POST binding is similar to Redirect binding but POST binding exchanges XML documents using POST requests instead of GET requests. POST Binding uses JavaScript to make the browser send a POST request to the Red Hat Single Sign-On server or application when exchanging documents.
HTTP responds with an HTML document which contains an HTML form containing embedded JavaScript. When the page loads, the JavaScript automatically invokes the form. POST binding is recommended due to two restrictions: Security - With Redirect binding, the SAML response is part of the URL. It is less secure as it is possible to capture the response in logs. Size - Sending the document in the HTTP payload provides more scope for large amounts of data than in a limited URL. 10.2.1.3. ECP Enhanced Client or Proxy (ECP) is a SAML v.2.0 profile which allows the exchange of SAML attributes outside the context of a web browser. It is often used by REST or SOAP-based clients. 10.2.2. Red Hat Single Sign-On Server SAML URI Endpoints Red Hat Single Sign-On has one endpoint for all SAML requests. http(s)://authserver.host/auth/realms/{realm-name}/protocol/saml All bindings use this endpoint. 10.3. OpenID Connect compared to SAML The following lists a number of factors to consider when choosing a protocol. For most purposes, Red Hat Single Sign-On recommends using OIDC. OIDC OIDC is specifically designed to work with the web. OIDC is suited for HTML5/JavaScript applications because it is easier to implement on the client side than SAML. OIDC tokens are in the JSON format which makes them easier for Javascript to consume. OIDC has features to make security implementation easier. For example, see the iframe trick that the specification uses to determine a users login status. SAML SAML is designed as a layer to work on top of the web. SAML can be more verbose than OIDC. Users pick SAML over OIDC because there is a perception that it is mature. Users pick SAML over OIDC existing applications that are secured with it. 10.4. Docker registry v2 authentication Note Docker authentication is disabled by default. To enable docker authentication, see Profiles . Docker Registry V2 Authentication is a protocol, similar to OIDC, that authenticates users against Docker registries. Red Hat Single Sign-On's implementation of this protocol lets Docker clients use a Red Hat Single Sign-On authentication server authenticate against a registry. This protocol uses standard token and signature mechanisms but it does deviate from a true OIDC implementation. It deviates by using a very specific JSON format for requests and responses as well as mapping repository names and permissions to the OAuth scope mechanism. 10.4.1. Docker authentication flow The authentication flow is described in the Docker API documentation . The following is a summary from the perspective of the Red Hat Single Sign-On authentication server: Perform a docker login . The Docker client requests a resource from the Docker registry. If the resource is protected and no authentication token is in the request, the Docker registry server responds with a 401 HTTP message with some information on the permissions that are required and the location of the authorization server. The Docker client constructs an authentication request based on the 401 HTTP message from the Docker registry. The client uses the locally cached credentials (from the docker login command) as part of the HTTP Basic Authentication request to the Red Hat Single Sign-On authentication server. The Red Hat Single Sign-On authentication server attempts to authenticate the user and return a JSON body containing an OAuth-style Bearer token. The Docker client receives a bearer token from the JSON response and uses it in the authorization header to request the protected resource. 
The Docker registry receives the new request for the protected resource with the token from the Red Hat Single Sign-On server. The registry validates the token and grants access to the requested resource (if appropriate). Note Red Hat Single Sign-On does not create a browser SSO session after successful authentication with the Docker protocol. A browser SSO session would serve no purpose here, because the Docker protocol cannot refresh tokens or obtain the status of a token or session from the Red Hat Single Sign-On server; therefore a browser SSO session is not necessary. For more details, see the transient session section. 10.4.2. Red Hat Single Sign-On Docker Registry v2 Authentication Server URI Endpoints Red Hat Single Sign-On has one endpoint for all Docker auth v2 requests. http(s)://authserver.host/auth/realms/{realm-name}/protocol/docker-v2
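As an illustration of the flow above from the client's point of view, the token handshake can be reproduced with plain curl. The registry hostname, repository name, and challenge contents below are placeholders for this sketch of the standard Docker token-authentication exchange; the exact realm and service values depend on how your registry is configured to point at Red Hat Single Sign-On.
# 1. An unauthenticated request is rejected with a challenge naming the authorization server, for example:
#    Www-Authenticate: Bearer realm="<Red Hat Single Sign-On docker-v2 endpoint>",service="<registry service id>"
curl -i https://registry.example.com/v2/
# 2. The Docker client follows the realm URL from the challenge with HTTP Basic credentials and the
#    requested scope, and receives a JSON body containing the bearer token it presents to the registry.
curl -u myuser "<realm URL from the challenge>?service=<registry service id>&scope=repository:myproject/myimage:pull"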
[ "<spi name=\"ciba-auth-channel\"> <default-provider>ciba-http-auth-channel</default-provider> <provider name=\"ciba-http-auth-channel\" enabled=\"true\"> <properties> <property name=\"httpAuthenticationChannelUri\" value=\"https://backend.internal.example.com/auth\"/> </properties> </provider> </spi>", "POST [delegation_reception]", "POST /auth/realms/[realm]/protocol/openid-connect/ext/ciba/auth/callback", "https://localhost:8080/auth" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/sso_protocols
Chapter 3. Understanding update channels and releases
Chapter 3. Understanding update channels and releases Update channels are the mechanism by which users declare the OpenShift Container Platform minor version they intend to update their clusters to. They also allow users to choose the timing and level of support their updates will have through the fast , stable , candidate , and eus channel options. The Cluster Version Operator uses an update graph based on the channel declaration, along with other conditional information, to provide a list of recommended and conditional updates available to the cluster. Update channels correspond to a minor version of OpenShift Container Platform. The version number in the channel represents the target minor version that the cluster will eventually be updated to, even if it is higher than the cluster's current minor version. For instance, OpenShift Container Platform 4.10 update channels provide the following recommendations: Updates within 4.10. Updates within 4.9. Updates from 4.9 to 4.10, allowing all 4.9 clusters to eventually update to 4.10, even if they do not immediately meet the minimum z-stream version requirements. eus-4.10 only: updates within 4.8. eus-4.10 only: updates from 4.8 to 4.9 to 4.10, allowing all 4.8 clusters to eventually update to 4.10. 4.10 update channels do not recommend updates to 4.11 or later releases. This strategy ensures that administrators must explicitly decide to update to the minor version of OpenShift Container Platform. Update channels control only release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of OpenShift Container Platform always installs that version. OpenShift Container Platform 4.13 offers the following update channels: stable-4.13 eus-4.y (only offered for EUS versions and meant to facilitate updates between EUS versions) fast-4.13 candidate-4.13 If you do not want the Cluster Version Operator to fetch available updates from the update recommendation service, you can use the oc adm upgrade channel command in the OpenShift CLI to configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted network access and there is no local, reachable update recommendation service. Warning Red Hat recommends updating to versions suggested by OpenShift Update Service only. For a minor version update, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions. 3.1. Update channels 3.1.1. fast-4.13 channel The fast-4.13 channel is updated with new versions of OpenShift Container Platform 4.13 as soon as Red Hat declares the version as a general availability (GA) release. As such, these releases are fully supported and purposed to be used in production environments. 3.1.2. stable-4.13 channel While the fast-4.13 channel contains releases as soon as their errata are published, releases are added to the stable-4.13 channel after a delay. During this delay, data is collected from multiple sources and analyzed for indications of product regressions. Once a significant number of data points have been collected, these releases are added to the stable channel. Note Since the time required to obtain a significant number of data points varies based on many factors, Service LeveL Objective (SLO) is not offered for the delay duration between fast and stable channels. 
For more information, please see "Choosing the correct channel for your cluster" Newly installed clusters default to using stable channels. 3.1.3. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform offer Extended Update Support (EUS). Releases promoted to the stable channel are also simultaneously promoted to the EUS channels. The primary purpose of the EUS channels is to serve as a convenience for clusters performing an EUS-to-EUS update. Note Both standard and non-EUS subscribers can access all EUS repositories and necessary RPMs ( rhel-*-eus-rpms ) to be able to support critical purposes such as debugging and building drivers. Important EUS channels are the only channels that receive additional z-streams while a release is in the EUS phase. 3.1.4. candidate-4.13 channel The candidate-4.13 channel offers unsupported early access to releases as soon as they are built. Releases present only in candidate channels may not contain the full feature set of eventual GA releases or features may be removed prior to GA. Additionally, these releases have not been subject to full Red Hat Quality Assurance and may not offer update paths to later GA releases. Given these caveats, the candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. 3.1.5. Update recommendations in the channel OpenShift Container Platform maintains an update recommendation service that knows your installed OpenShift Container Platform version and the path to take within the channel to get you to the release. Update paths are also limited to versions relevant to your currently selected channel and its promotion characteristics. You can imagine seeing the following releases in your channel: 4.13.0 4.13.1 4.13.3 4.13.4 The service recommends only updates that have been tested and have no known serious regressions. For example, if your cluster is on 4.13.1 and OpenShift Container Platform suggests 4.13.4, then it is recommended to update from 4.13.1 to 4.13.4. Important Do not rely on consecutive patch numbers. In this example, 4.13.2 is not and never was available in the channel, therefore updates to 4.13.2 are not recommended or supported. 3.1.6. Update recommendations and Conditional Updates Red Hat monitors newly released versions and update paths associated with those versions before and after they are added to supported channels. If Red Hat removes update recommendations from any supported release, a superseding update recommendation will be provided to a future version that corrects the regression. There may however be a delay while the defect is corrected, tested, and promoted to your selected channel. Beginning in OpenShift Container Platform 4.10, when update risks are confirmed, they are declared as Conditional Update risks for the relevant updates. Each known risk may apply to all clusters or only clusters matching certain conditions. Some examples include having the Platform set to None or the CNI provider set to OpenShiftSDN . The Cluster Version Operator (CVO) continually evaluates known risks against the current cluster state. If no risks match, the update is recommended. If the risk matches, those updates are supported but not recommended, and a reference link is provided. The reference link helps the cluster admin decide if they would like to accept the risk and update anyway. When Red Hat chooses to declare Conditional Update risks, that action is taken in all relevant channels simultaneously. 
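When risks have been declared for an update, the CLI can show both the recommended updates and the conditional ones. A brief sketch, assuming a reasonably recent oc client; the exact output format varies by version and cluster state:
# List the updates recommended for the cluster's current channel:
oc adm upgrade
# Also include updates that are supported but not recommended because a known risk matches this cluster:
oc adm upgrade --include-not-recommended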
Declaration of a Conditional Update risk may happen either before or after the update has been promoted to supported channels. 3.1.7. Choosing the correct channel for your cluster Choosing the appropriate channel involves two decisions. First, select the minor version you want for your cluster update. Selecting a channel which matches your current version ensures that you only apply z-stream updates and do not receive feature updates. Selecting an available channel which has a version greater than your current version will ensure that after one or more updates your cluster will have updated to that version. Your cluster will only be offered channels which match its current version, the next version, or the next EUS version. Note Due to the complexity involved in planning updates between versions many minors apart, channels that assist in planning updates beyond a single EUS-to-EUS update are not offered. Second, you should choose your desired rollout strategy. You may choose to update as soon as Red Hat declares a release GA by selecting from fast channels or you may want to wait for Red Hat to promote releases to the stable channel. Update recommendations offered in the fast-4.13 and stable-4.13 channels are both fully supported and benefit equally from ongoing data analysis. The promotion delay before promoting a release to the stable channel represents the only difference between the two channels. Updates to the latest z-streams are generally promoted to the stable channel within a week or two; however, the delay when initially rolling out updates to the latest minor version is much longer, generally 45-90 days. Consider the promotion delay when choosing your desired channel, as waiting for promotion to the stable channel may affect your scheduling plans. Additionally, there are several factors which may lead an organization to move clusters to the fast channel either permanently or temporarily, including: The desire to apply a specific fix known to affect your environment without delay. Application of CVE fixes without delay. CVE fixes may introduce regressions, so promotion delays still apply to z-streams with CVE fixes. Internal testing processes. If it takes your organization several weeks to qualify releases, it is best to test concurrently with our promotion process rather than waiting. This also assures that any telemetry signal provided to Red Hat is factored into our rollout, so issues relevant to you can be fixed faster. 3.1.8. Restricted network clusters If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact updates. During an update, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings. 3.1.9. Switching between channels A channel can be switched from the web console or through the oc adm upgrade channel command: $ oc adm upgrade channel <channel> The web console will display an alert if you switch to a channel that does not include the current release. The web console does not recommend any updates while on a channel without the current release. You can return to the original channel at any point, however. Changing your channel might impact the supportability of your cluster. The following conditions might apply: Your cluster is still supported if you change from the stable-4.13 channel to the fast-4.13 channel.
You can switch to the candidate-4.13 channel at any time, but some releases for this channel might be unsupported. You can switch from the candidate-4.13 channel to the fast-4.13 channel if your current release is a general availability release. You can always switch from the fast-4.13 channel to the stable-4.13 channel. There is a possible delay of up to a day for the release to be promoted to stable-4.13 if the current release was recently promoted. Additional resources Updating along a conditional upgrade path Choosing the correct channel for your cluster
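To complement the channel-switching command shown above, note that the channel is simply a field on the ClusterVersion resource, so it can also be inspected and changed by patching that resource directly. A sketch, equivalent in effect to oc adm upgrade channel; the channel name is an example:
# Show the currently configured channel:
oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'
# Switch the cluster to another channel for the same minor version:
oc patch clusterversion version --type merge -p '{"spec":{"channel":"fast-4.13"}}'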
[ "oc adm upgrade channel <channel>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/updating_clusters/understanding-upgrade-channels-releases
Chapter 15. Using 2-way replication with CephFS
Chapter 15. Using 2-way replication with CephFS To reduce storage overhead with CephFS when data resiliency is not a primary concern, you can opt for using 2-way replication (replica-2). This reduces the amount of storage space used and decreases the level of fault tolerance. There are two ways to use replica-2 for CephFS: Edit the existing default pool to replica-2 and use it with the default CephFS storageclass . Add an additional CephFS data pool with replica-2 . 15.1. Editing the existing default CephFS data pool to replica-2 Use this procedure to edit the existing default CephFS pool to replica-2 and use it with the default CephFS storageclass. Procedure Patch the storagecluster to change the default CephFS data pool to replica-2. Check the pool details. 15.2. Adding an additional CephFS data pool with replica-2 Use this procedure to add an additional CephFS data pool with replica-2. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in the Ready state. Procedure Click Storage StorageClasses Create Storage Class . Select CephFS Provisioner . Under Storage Pool , click Create new storage pool . Fill in the Create Storage Pool fields. Under Data protection policy , select 2-way Replication . Confirm the Storage Pool creation. In the Storage Class creation form, choose the newly created Storage Pool. Confirm the Storage Class creation. Verification Click Storage Data Foundation . In the Storage systems tab, select the new storage system. The Details tab of the storage system reflects the correct volume and device types you chose during creation.
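Beyond listing the pools, you can confirm the replica count directly from Ceph. This is a sketch that assumes the rook-ceph toolbox pod is deployed in the openshift-storage namespace (it is not deployed by default) and uses the default data pool name from this chapter:
# Locate the toolbox pod and query the replica size of the default CephFS data pool:
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
oc -n openshift-storage rsh $TOOLS_POD ceph osd pool get ocs-storagecluster-cephfilesystem-data0 size
# A value of "size: 2" confirms the pool is using 2-way replication.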
[ "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephFilesystems/dataPoolSpec/replicated/size\", \"value\": 2 }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched", "oc get cephfilesystem ocs-storagecluster-cephfilesystem -o=jsonpath='{.spec.dataPools}' | jq [ { \"application\": \"\", \"deviceClass\": \"ssd\", \"erasureCoded\": { \"codingChunks\": 0, \"dataChunks\": 0 }, \"failureDomain\": \"zone\", \"mirroring\": {}, \"quotas\": {}, \"replicated\": { \"replicasPerFailureDomain\": 1, \"size\": 2, \"targetSizeRatio\": 0.49 }, \"statusCheck\": { \"mirror\": {} } } ]", "ceph osd pool ls | grep filesystem ocs-storagecluster-cephfilesystem-metadata ocs-storagecluster-cephfilesystem-data0" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/using-2-way-replication-with-cephfs_rhodf
Chapter 4. Configuring a Red Hat High Availability Cluster on Google Cloud Platform
Chapter 4. Configuring a Red Hat High Availability Cluster on Google Cloud Platform To create a cluster where RHEL nodes automatically redistribute their workloads if a node failure occurs, use the Red Hat High Availability Add-On. Such high availability (HA) clusters can also be hosted on public cloud platforms, including Google Cloud Platform (GCP). Creating RHEL HA clusters on GCP is similar to creating HA clusters in non-cloud environments, with certain specifics. To configure a Red Hat HA cluster on Google Cloud Platform (GCP) using Google Compute Engine (GCE) virtual machine (VM) instances as cluster nodes, see the following sections. These provide information on: Prerequisite procedures for setting up your environment for GCP. Once you have set up your environment, you can create and configure VM instances. Procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on GCP. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing network resource agents. Prerequisites Red Hat Enterprise Linux 8 Server: rhel-8-server-rpms/8Server/x86_64 Red Hat Enterprise Linux 8 Server (High Availability): rhel-8-server-ha-rpms/8Server/x86_64 You must belong to an active GCP project and have sufficient permissions to create resources in the project. Your project should have a service account that belongs to a VM instance and not an individual user. See Using the Compute Engine Default Service Account for information about using the default service account instead of creating a separate service account. If you or your project administrator create a custom service account, the service account should be configured for the following roles. Cloud Trace Agent Compute Admin Compute Network Admin Cloud Datastore User Logging Admin Monitoring Editor Monitoring Metric Writer Service Account Administrator Storage Admin 4.1. The benefits of using high-availability clusters on public cloud platforms A high-availability (HA) cluster is a set of computers (called nodes ) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster. You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits: Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted. Scalability: Additional nodes can be started when demand is high and stopped when demand is low. Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running. Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier. To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools. Additional resources High Availability Add-On overview 4.2. 
Required system packages To create and configure a base image of RHEL, your host system must have the following packages installed. Table 4.1. System packages Package Repository Description libvirt rhel-8-for-x86_64-appstream-rpms Open source API, daemon, and management tool for managing platform virtualization virt-install rhel-8-for-x86_64-appstream-rpms A command-line utility for building VMs libguestfs rhel-8-for-x86_64-appstream-rpms A library for accessing and modifying VM file systems libguestfs-tools rhel-8-for-x86_64-appstream-rpms System administration tools for VMs; includes the guestfish utility 4.3. Red Hat Enterprise Linux image options on GCP You can use multiple types of images for deploying RHEL 8 on Google Cloud Platform. Based on your requirements, consider which option is optimal for your use case. Table 4.2. Image options Image option Subscriptions Sample scenario Considerations Deploy a Red Hat Gold Image. Use your existing Red Hat subscriptions. Select a Red Hat Gold Image on Google Cloud Platform. For details on Gold Images and how to access them on Google Cloud Platform, see the Red Hat Cloud Access Reference Guide . The subscription includes the Red Hat product cost; you pay Google for all other instance costs. Red Hat provides support directly for custom RHEL images. Deploy a custom image that you move to GCP. Use your existing Red Hat subscriptions. Upload your custom image and attach your subscriptions. The subscription includes the Red Hat product cost; you pay all other instance costs. Red Hat provides support directly for custom RHEL images. Deploy an existing GCP image that includes RHEL. The GCP images include a Red Hat product. Choose a RHEL image when you launch an instance on the GCP Compute Engine , or choose an image from the Google Cloud Platform Marketplace . You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. Note You can create a custom image for GCP by using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information. Important You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image: Create a new custom RHEL instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing. Additional resources Red Hat in the Public Cloud Compute Engine images Creating an instance from a custom image 4.4. Installing the Google Cloud SDK Many of the procedures to manage HA clusters on Google Cloud Platform (GCP) require the tools in the Google Cloud SDK. Procedure Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details. Follow the same instructions for initializing the Google Cloud SDK. Note Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command. Additional resources Quickstart for Linux gcloud command reference gcloud command-line tool overview 4.5. Creating a GCP image bucket The following document includes the minimum requirements for creating a multi-regional bucket in your default location. 
Prerequisites GCP storage utility (gsutil) Procedure If you are not already logged in to Google Cloud Platform, log in with the following command. Create a storage bucket. Example: Additional resources Make buckets 4.6. Creating a custom virtual private cloud network and subnet A custom virtual private cloud (VPC) network and subnet are required for a cluster to be configured with a High Availability (HA) function. Procedure Launch the GCP Console. Select VPC networks under Networking in the left navigation pane. Click Create VPC Network . Enter a name for the VPC network. Under the New subnet , create a Custom subnet in the region where you want to create the cluster. Click Create . 4.7. Preparing and importing a base GCP image Before a local RHEL 8 image can be deployed in GCP, you must first convert and upload the image to your GCP Bucket. Procedure Convert the file. Images uploaded to GCP must be in raw format and named disk.raw . Compress the raw file. Images uploaded to GCP must be compressed. Import the compressed image to the bucket created earlier. 4.8. Creating and configuring a base GCP instance To create and configure a GCP instance that complies with GCP operating and security requirements, complete the following steps. Procedure Create an image from the compressed file in the bucket. Example: Create a template instance from the image. The minimum size required for a base RHEL instance is n1-standard-2. See gcloud compute instances create for additional configuration options. Example: Connect to the instance with an SSH terminal session. Update the RHEL software. Register with Red Hat Subscription Manager (RHSM). Enable a Subscription Pool ID (or use the --auto-attach command). Disable all repositories. Enable the following repository. Run the yum update command. Install the GCP Linux Guest Environment on the running instance (in-place installation). See Install the guest environment in-place for instructions. Select the CentOS/RHEL option. Copy the command script and paste it at the command prompt to run the script immediately. Make the following configuration changes to the instance. These changes are based on GCP recommendations for custom images. See gcloudcompute images list for more information. Edit the /etc/chrony.conf file and remove all NTP servers. Add the following NTP server. Remove any persistent network device rules. Set the network service to start automatically. Set the sshd service to start automatically. Set the time zone to UTC. Optional: Edit the /etc/ssh/ssh_config file and add the following lines to the end of the file. This keeps your SSH session active during longer periods of inactivity. Edit the /etc/ssh/sshd_config file and make the following changes, if necessary. The ClientAliveInterval 420 setting is optional; this keeps your SSH session active during longer periods of inactivity. Disable password access. Important Previously, you enabled password access to allow SSH session access to configure the instance. You must disable password access. All SSH session access must be passwordless. Unregister the instance from the subscription manager. Clean the shell history. Keep the instance running for the procedure. 4.9. Creating a snapshot image To preserve the configuration and disk data of a GCP HA instance, create a snapshot of it. Procedure On the running instance, synchronize data to disk. On your host system, create the snapshot. On your host system, create the configured image from the snapshot. 
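Optionally, you can confirm that the configured image was created and is ready before moving on to the template instance. A sketch using the example image name from this procedure:
# Verify that the image built from the snapshot exists and is in the READY state:
gcloud compute images describe ConfiguredImageFromSnapshot
gcloud compute images list --filter="name=ConfiguredImageFromSnapshot"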
Additional resources Creating Persistent Disk Snapshots 4.10. Creating an HA node template instance and HA nodes After you have configured an image from the snapshot, you can create a node template. Then, you can use this template to create all HA nodes. Procedure Create an instance template. Example: Create multiple nodes in one zone. Example: 4.11. Installing HA packages and agents On each of your nodes, you need to install the High Availability packages and agents to be able to configure a Red Hat High Availability cluster on Google Cloud Platform (GCP). Procedure In the Google Cloud Console, select Compute Engine and then select VM instances . Select the instance, click the arrow to SSH , and select the View gcloud command option. Paste this command at a command prompt for passwordless access to the instance. Enable sudo account access and register with Red Hat Subscription Manager. Enable a Subscription Pool ID (or use the --auto-attach command). Disable all repositories. Enable the following repositories. Install pcs pacemaker , the fence agents, and the resource agents. Update all packages. 4.12. Configuring HA services On each of your nodes, configure the HA services. Procedure The user hacluster was created during the pcs and pacemaker installation in the step. Create a password for the user hacluster on all cluster nodes. Use the same password for all nodes. If the firewalld service is installed, add the HA service. Start the pcs service and enable it to start on boot. Verification Ensure the pcsd service is running. Edit the /etc/hosts file. Add RHEL host names and internal IP addresses for all nodes. Additional resources How should the /etc/hosts file be set up on RHEL cluster nodes? (Red Hat Knowledgebase) 4.13. Creating a cluster To convert multiple nodes into a cluster, use the following steps. Procedure On one of the nodes, authenticate the pcs user. Specify the name of each node in the cluster in the command. Create the cluster. Verification Run the following command to enable nodes to join the cluster automatically when started. Start the cluster. 4.14. Creating a fencing device High Availability (HA) environments require a fencing device, which ensures that malfunctioning nodes are isolated and the cluster remains available if an outage occurs. Note that for most default configurations, the GCP instance names and the RHEL host names are identical. Procedure Obtain GCP instance names. Note that the output of the following command also shows the internal ID for the instance. Example: Create a fence device. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device . Verification Verify that the fence devices started. Example: 4.15. Configuring GCP node authorization Configure cloud SDK tools to use your account credentials to access GCP. Procedure Enter the following command on each node to initialize each node with your project ID and account credentials. 4.16. Configuring the gcp-vcp-move-vip resource agent The gcp-vpc-move-vip resource agent attaches a secondary IP address (alias IP) to a running instance. This is a floating IP address that can be passed between different nodes in the cluster. 
To show more information about this resource: You can configure the resource agent to use a primary subnet address range or a secondary subnet address range: Primary subnet address range Complete the following steps to configure the resource for the primary VPC subnet. Procedure Create the aliasip resource. Include an unused internal IP address. Include the CIDR block in the command. Example: Create an IPaddr2 resource for managing the IP on the node. Example: Group the network resources under vipgrp . Verification Verify that the resources have started and are grouped under vipgrp . Verify that the resource can move to a different node. Example: Verify that the vip successfully started on a different node. Secondary subnet address range Complete the following steps to configure the resource for a secondary subnet address range. Prerequisites You have created a custom network and a subnet Procedure Create a secondary subnet address range. Example: Create the aliasip resource. Create an unused internal IP address in the secondary subnet address range. Include the CIDR block in the command. Example: Create an IPaddr2 resource for managing the IP on the node. Example: Group the network resources under vipgrp . Verification Verify that the resources have started and are grouped under vipgrp . Verify that the resource can move to a different node. Example: Verify that the vip successfully started on a different node. 4.17. Additional resources Support Policies for RHEL High Availability clusters - Transport Protocols VPC network overview Exploring RHEL High Availability's Components, Concepts, and Features - Overview of Transport Protocols Design Guidance for RHEL High Availability Clusters - Selecting the Transport Protocol
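Beyond checking pcs status, the most direct way to confirm that the fencing and floating-IP configuration from this chapter works end to end is to exercise it once on a test node. This sketch reuses the example node name from earlier sections; note that fencing reboots the node, so only run it when a brief outage of that node is acceptable:
# Manually fence one node and let the cluster recover it:
pcs stonith fence rhel81-node-02
# After the node reboots and rejoins, confirm membership and resource placement:
pcs status nodes
pcs status resources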
[ "gcloud auth login", "gsutil mb gs:// BucketName", "gsutil mb gs://rhel-ha-bucket", "qemu-img convert -f qcow2 ImageName .qcow2 -O raw disk.raw", "tar -Sczf ImageName .tar.gz disk.raw", "gsutil cp ImageName .tar.gz gs:// BucketName", "gcloud compute images create BaseImageName --source-uri gs:// BucketName / BaseImageName .tar.gz", "[admin@localhost ~] USD gcloud compute images create rhel-76-server --source-uri gs://user-rhelha/rhel-server-76.tar.gz Created [https://www.googleapis.com/compute/v1/projects/MyProject/global/images/rhel-server-76]. NAME PROJECT FAMILY DEPRECATED STATUS rhel-76-server rhel-ha-testing-on-gcp READY", "gcloud compute instances create BaseInstanceName --can-ip-forward --machine-type n1-standard-2 --image BaseImageName --service-account ServiceAccountEmail", "[admin@localhost ~] USD gcloud compute instances create rhel-76-server-base-instance --can-ip-forward --machine-type n1-standard-2 --image rhel-76-server --service-account [email protected] Created [https://www.googleapis.com/compute/v1/projects/rhel-ha-testing-on-gcp/zones/us-east1-b/instances/rhel-76-server-base-instance]. NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS rhel-76-server-base-instance us-east1-bn1-standard-2 10.10.10.3 192.227.54.211 RUNNING", "ssh root@PublicIPaddress", "subscription-manager repos --disable= *", "subscription-manager repos --enable=rhel-8-server-rpms", "yum update -y", "metadata.google.internal iburst Google NTP server", "rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules", "chkconfig network on", "systemctl enable sshd systemctl is-enabled sshd", "ln -sf /usr/share/zoneinfo/UTC /etc/localtime", "Server times out connections after several minutes of inactivity. Keep alive ssh connections by sending a packet every 7 minutes. ServerAliveInterval 420", "PermitRootLogin no PasswordAuthentication no AllowTcpForwarding yes X11Forwarding no PermitTunnel no Compute times out connections after 10 minutes of inactivity. Keep ssh connections alive by sending a packet every 7 minutes. ClientAliveInterval 420", "ssh_pwauth from 1 to 0. ssh_pwauth: 0", "subscription-manager unregister", "export HISTSIZE=0", "sync", "gcloud compute disks snapshot InstanceName --snapshot-names SnapshotName", "gcloud compute images create ConfiguredImageFromSnapshot --source-snapshot SnapshotName", "gcloud compute instance-templates create InstanceTemplateName --can-ip-forward --machine-type n1-standard-2 --image ConfiguredImageFromSnapshot --service-account ServiceAccountEmailAddress", "[admin@localhost ~] USD gcloud compute instance-templates create rhel-81-instance-template --can-ip-forward --machine-type n1-standard-2 --image rhel-81-gcp-image --service-account [email protected] Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/global/instanceTemplates/rhel-81-instance-template]. NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP rhel-81-instance-template n1-standard-2 2018-07-25T11:09:30.506-07:00", "gcloud compute instances create NodeName01 NodeName02 --source-instance-template InstanceTemplateName --zone RegionZone --network= NetworkName --subnet= SubnetName", "[admin@localhost ~] USD gcloud compute instances create rhel81-node-01 rhel81-node-02 rhel81-node-03 --source-instance-template rhel-81-instance-template --zone us-west1-b --network=projectVPC --subnet=range0 Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-01]. 
Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-02]. Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-03]. NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS rhel81-node-01 us-west1-b n1-standard-2 10.10.10.4 192.230.25.81 RUNNING rhel81-node-02 us-west1-b n1-standard-2 10.10.10.5 192.230.81.253 RUNNING rhel81-node-03 us-east1-b n1-standard-2 10.10.10.6 192.230.102.15 RUNNING", "subscription-manager repos --disable= *", "subscription-manager repos --enable=rhel-8-server-rpms subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp", "yum update -y", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl start pcsd.service systemctl enable pcsd.service Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.", "systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2018-06-25 19:21:42 UTC; 15s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5901 (pcsd) CGroup: /system.slice/pcsd.service └─5901 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &", "pcs host auth hostname1 hostname2 hostname3 Username: hacluster Password: hostname1 : Authorized hostname2 : Authorized hostname3 : Authorized", "pcs cluster setup cluster-name hostname1 hostname2 hostname3", "pcs cluster enable --all", "pcs cluster start --all", "fence_gce --zone us-west1-b --project=rhel-ha-on-gcp -o list", "fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list 4435801234567893181,InstanceName-3 4081901234567896811,InstanceName-1 7173601234567893341,InstanceName-2", "pcs stonith create FenceDeviceName fence_gce zone= Region-Zone project= MyProject", "pcs status", "pcs status Cluster name: gcp-cluster Stack: corosync Current DC: rhel81-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum Last updated: Fri Jul 27 12:53:25 2018 Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel81-node-01 3 nodes configured 3 resources configured Online: [ rhel81-node-01 rhel81-node-02 rhel81-node-03 ] Full list of resources: us-west1-b-fence (stonith:fence_gce): Started rhel81-node-01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled", "gcloud-ra init", "pcs resource describe gcp-vpc-move-vip", "pcs resource create aliasip gcp-vpc-move-vip alias_ip= UnusedIPaddress/CIDRblock", "pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.10.200/32", "pcs resource create vip IPaddr2 nic= interface ip= AliasIPaddress cidr_netmask=32", "pcs resource create vip IPaddr2 nic=eth0 ip=10.10.10.200 cidr_netmask=32", "pcs resource group add vipgrp aliasip vip", "pcs status", "pcs resource move vip Node", "pcs resource move vip rhel81-node-03", "pcs status", "gcloud-ra compute networks subnets update SubnetName --region RegionName --add-secondary-ranges SecondarySubnetName = SecondarySubnetRange", "gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24", "pcs resource create aliasip gcp-vpc-move-vip alias_ip= UnusedIPaddress/CIDRblock", "pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.20.200/32", "pcs resource create 
vip IPaddr2 nic= interface ip= AliasIPaddress cidr_netmask=32", "pcs resource create vip IPaddr2 nic=eth0 ip=10.10.20.200 cidr_netmask=32", "pcs resource group add vipgrp aliasip vip", "pcs status", "pcs resource move vip Node", "pcs resource move vip rhel81-node-03", "pcs status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_google_cloud_platform/configuring-rhel-ha-on-gcp_cloud-content-gcp
Chapter 2. Installing HawtIO
Chapter 2. Installing HawtIO There are several options to start using the HawtIO console: Running HawtIO standalone (in detached mode) from CLI (JBang) Running HawtIO embedded in a Quarkus app Running HawtIO embedded in a Spring Boot app 2.1. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite: You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure: In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> 2.2. Running from CLI (JBang) You can install and run HawtIO from CLI using JBang. Note If you don't have JBang locally yet, first install it: https://www.jbang.dev/download/ Procedure: Install the latest HawtIO on your machine using the jbang command: USD jbang app install -Dhawtio.jbang.version=4.0.0.redhat-00040 hawtio@hawtio/hawtio Note This installation method is available only with jbang>=0.115.0 . It will install the HawtIO command. Launch a HawtIO instance with the following command: USD hawtio The command will automatically open the console at http://0.0.0.0:8080/hawtio/ . To change the port number, run the following command: USD hawtio --port 8090 For more information on the configuration options of the CLI, run the following code: USD hawtio --help Usage: hawtio [-hjoV] [-c=<contextPath>] [-d=<plugins>] [-e=<extraClassPath>] [-H=<host>] [-k=<keyStore>] [-l=<warLocation>] [-p=<port>] [-s=<keyStorePass>] [-w=<war>] Run Hawtio -c, --context-path=<contextPath> Context path. -d, --plugins-dir=<plugins> Directory to search for .war files to install as 3rd party plugins. -e, --extra-class-path=<extraClassPath> Extra class path. -h, --help Print usage help and exit. -H, --host=<host> Hostname to listen to. -j, --join Join server thread. -k, --key-store=<keyStore> JKS keyStore with the keys for https. -l, --war-location=<warLocation> Directory to search for .war files. -o, --open-url Open the web console automatic in the web browser. -p, --port=<port> Port number. 
-s, --key-store-pass=<keyStorePass> Password for the JKS keyStore with the keys for https. -V, --version Print Hawtio version -w, --war=<war> War file or directory of the hawtio web application. 2.3. Running a Quarkus app You can attach HawtIO to your Quarkus application in a single step. Procedure: Add io.hawt:hawtio-quarkus and the supporting Camel Quarkus extensions to the dependencies in pom.xml : <dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-bom</artifactId> <version>4.0.0.redhat-00040</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> <!-- ... other BOMs or dependencies ... --> </dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-quarkus</artifactId> </dependency> <!-- Mandatory for enabling Camel management via JMX / Hawtio --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-management-starter</artifactId> </dependency> <!-- (Optional) Required for Hawtio Camel route diagram tab --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies> Run HawtIO with your Quarkus application in development mode as follows: mvn compile quarkus:dev Open http://localhost:8080/hawtio to view the HawtIO console. 2.4. Running a Spring Boot app You can attach HawtIO to your Spring Boot application in two steps. Procedure: Add io.hawt:hawtio-springboot and the supporting Camel Spring Boot starters to the dependencies in pom.xml : <dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-bom</artifactId> <version>4.0.0.redhat-00040</version> <type>pom</type> <scope>import</scope> </dependency> <!-- ... other BOMs or dependencies ... --> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-springboot</artifactId> </dependency> <!-- Mandatory for enabling Camel management via JMX / Hawtio --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-management-starter</artifactId> </dependency> <!-- (Optional) Required for Hawtio Camel route diagram tab --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-xml-starter</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies> Enable the HawtIO and Jolokia endpoints by adding the following lines to application.properties : spring.jmx.enabled = true management.endpoints.web.exposure.include = hawtio,jolokia Run HawtIO with your Spring Boot application in development mode as follows: mvn spring-boot:run Open http://localhost:8080/actuator/hawtio to view the HawtIO console. 2.4.1. Configuring HawtIO path If you don't prefer to have the /actuator base path for the HawtIO endpoint, you can also execute the following: Customize the Spring Boot management base path with the management.endpoints.web.base-path property: management.endpoints.web.base-path = / You can also customize the path to the HawtIO endpoint by setting the management.endpoints.web.path-mapping.hawtio property: management.endpoints.web.path-mapping.hawtio = hawtio/console Example: There is a working Spring Boot example that shows how to monitor a web application that exposes information about Apache Camel routes, metrics, etc. with HawtIO Spring Boot example . A good MBean for real-time values and charts is java.lang/OperatingSystem . Try looking at Camel routes. 
Notice that, as you change selections in the tree, the list of available tabs changes dynamically based on the selected content.
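For reference, a minimal application.properties that combines the Spring Boot settings described above might look like the following; the hawtio/console path mapping is only an illustrative value, and every property shown is one already introduced in this section:
spring.jmx.enabled = true
management.endpoints.web.exposure.include = hawtio,jolokia
management.endpoints.web.base-path = /
management.endpoints.web.path-mapping.hawtio = hawtio/console
With this assumed configuration, the HawtIO console would be served at http://localhost:8080/hawtio/console instead of the default http://localhost:8080/actuator/hawtio.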
[ "<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>", "jbang app install -Dhawtio.jbang.version=4.0.0.redhat-00040 hawtio@hawtio/hawtio", "hawtio", "hawtio --port 8090", "hawtio --help Usage: hawtio [-hjoV] [-c=<contextPath>] [-d=<plugins>] [-e=<extraClassPath>] [-H=<host>] [-k=<keyStore>] [-l=<warLocation>] [-p=<port>] [-s=<keyStorePass>] [-w=<war>] Run Hawtio -c, --context-path=<contextPath> Context path. -d, --plugins-dir=<plugins> Directory to search for .war files to install as 3rd party plugins. -e, --extra-class-path=<extraClassPath> Extra class path. -h, --help Print usage help and exit. -H, --host=<host> Hostname to listen to. -j, --join Join server thread. -k, --key-store=<keyStore> JKS keyStore with the keys for https. -l, --war-location=<warLocation> Directory to search for .war files. -o, --open-url Open the web console automatic in the web browser. -p, --port=<port> Port number. -s, --key-store-pass=<keyStorePass> Password for the JKS keyStore with the keys for https. -V, --version Print Hawtio version -w, --war=<war> War file or directory of the hawtio web application.", "<dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-bom</artifactId> <version>4.0.0.redhat-00040</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> <!-- ... other BOMs or dependencies ... --> </dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-quarkus</artifactId> </dependency> <!-- Mandatory for enabling Camel management via JMX / Hawtio --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-management-starter</artifactId> </dependency> <!-- (Optional) Required for Hawtio Camel route diagram tab --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies>", "mvn compile quarkus:dev", "<dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-bom</artifactId> <version>4.0.0.redhat-00040</version> <type>pom</type> <scope>import</scope> </dependency> <!-- ... other BOMs or dependencies ... 
--> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-springboot</artifactId> </dependency> <!-- Mandatory for enabling Camel management via JMX / Hawtio --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-management-starter</artifactId> </dependency> <!-- (Optional) Required for Hawtio Camel route diagram tab --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-xml-starter</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies>", "spring.jmx.enabled = true management.endpoints.web.exposure.include = hawtio,jolokia", "mvn spring-boot:run", "management.endpoints.web.base-path = /", "management.endpoints.web.path-mapping.hawtio = hawtio/console" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/installing-hawtio
Chapter 4. Installing a cluster on Nutanix in a disconnected environment
Chapter 4. Installing a cluster on Nutanix in a disconnected environment In OpenShift Container Platform 4.18, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 4.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 4.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. 
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on {platform}". 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 4.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 4.6.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... 
where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the one or more UUIDs of the Prism Element subnet objects. Among them, one of the subnet's IP address prefixes (CIDRs) must contain the virtual IP addresses that the OpenShift Container Platform cluster uses. A maximum of 32 subnets for each failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. All subnetUUID values must be unique. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 4.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. 
Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.10. Post installation Complete the following steps to complete the configuration of your cluster. 4.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) Classic to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. Note OLM v1 uses the ClusterCatalog resource to retrieve information about the available cluster extensions in the mirror registry. 
The oc-mirror plugin v1 does not generate ClusterCatalog resources automatically; you must manually create them. The oc-mirror plugin v2 does, however, generate ClusterCatalog resources automatically. For more information on creating and applying ClusterCatalog resources, see "Adding a catalog to a cluster" in "Extensions". After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces Additional resources Adding a catalog to a cluster in "Extensions" 4.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 4.12. Additional resources About remote health monitoring 4.13. steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster
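As an illustrative sanity check after the deployment and post-installation steps above (a sketch only, using the kubeconfig path that the installation program prints; the exact output depends on your cluster), you might confirm that the nodes, cluster Operators, and mirror configuration are in place:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get nodes
$ oc get clusteroperators
$ oc get imagecontentsourcepolicy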
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install coreos print-stream-json", "\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"", "platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "./openshift-install create install-config --dir <installation_directory> 1", "platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>", "apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3", "apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml 
openshift-machine-api-nutanix-credentials-credentials.yaml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned
Release notes for Red Hat build of OpenJDK 11.0.20
Release notes for Red Hat build of OpenJDK 11.0.20 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/index
Chapter 4. Deploying Red Hat Quay on premise
Chapter 4. Deploying Red Hat Quay on premise The following image shows example on premise configurations for the following types of deployments: Standalone Proof of Concept Highly available deployment on multiple hosts Deployment on an OpenShift Container Platform cluster by using the Red Hat Quay Operator On premise example configurations 4.1. Red Hat Quay example deployments The following image shows three possible deployments for Red Hat Quay: Deployment examples Proof of Concept Running Red Hat Quay, Clair, and mirroring on a single node, with local image storage and local database Single data center Running highly available Red Hat Quay, Clair, and mirroring, on multiple nodes, with HA database and image storage Multiple data centers Running highly available Red Hat Quay, Clair, and mirroring, on multiple nodes in multiple data centers, with HA database and image storage 4.2. Red Hat Quay deployment topology The following image provides a high-level overview of a Red Hat Quay deployment topology: Red Hat Quay deployment topology In this deployment, all pushes, user interface, and API requests are received by public Red Hat Quay endpoints. Pulls are served directly from object storage. 4.3. Red Hat Quay deployment topology with storage proxy The following image provides a high-level overview of a Red Hat Quay deployment topology with storage proxy configured: Red Hat Quay deployment topology with storage proxy With storage proxy configured, all traffic passes through the public Red Hat Quay endpoint.
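As a sketch only, the storage proxy topology described in section 4.3 is typically reflected in the Red Hat Quay config.yaml; the following fragment assumes the FEATURE_PROXY_STORAGE flag and is not a complete configuration:
FEATURE_PROXY_STORAGE: true
With this assumed flag enabled, image pulls are proxied through the Red Hat Quay endpoint rather than redirected to object storage, which matches the difference between the two topologies shown above.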
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_architecture/sample-quay-on-prem-intro
Preface
Preface For OpenShift Data Foundation, node replacement can be performed proactively for an operational node and reactively for a failed node for the following deployments: For Amazon Web Services (AWS) User-provisioned infrastructure Installer-provisioned infrastructure For VMware User-provisioned infrastructure Installer-provisioned infrastructure For Microsoft Azure Installer-provisioned infrastructure For local storage devices Bare metal VMware IBM Power For replacing your storage nodes in external mode, see Red Hat Ceph Storage documentation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_nodes/preface-replacing-nodes
Appendix F. Swift response headers
Appendix F. Swift response headers The response from the server should include an X-Auth-Token value. The response might also contain an X-Storage-Url value that provides the API_VERSION / ACCOUNT prefix that is specified in other requests throughout the API documentation. Table F.1. Response Headers Name Description Type X-Storage-Token The authorization token for the X-Auth-User specified in the request. String X-Storage-Url The URL and API_VERSION / ACCOUNT path for the user. String
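For illustration, a token request to a Ceph Object Gateway Swift endpoint might look like the following; the host name, user, and key are placeholder values, and the /auth/1.0 path is an assumption that depends on how the gateway is configured:
$ curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: <swift_secret_key>" https://rgw.example.com/auth/1.0
A successful response carries the X-Auth-Token, X-Storage-Token, and X-Storage-Url headers described in the table above; the returned token is then sent as the X-Auth-Token header on subsequent requests against the X-Storage-Url path.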
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/developer_guide/swift-response-headers_dev
5.254. pulseaudio
5.254. pulseaudio 5.254.1. RHBA-2012:1070 - pulseaudio bug fix update Updated pulseaudio packages that fix one bug are now available for Red Hat Enterprise Linux 6. PulseAudio is a sound server for Linux and other Unix-like operating systems. Bug Fix BZ# 836139 On certain sound card models by Creative Labs, the S/PDIF Optical Raw output was enabled on boot regardless of the settings. This caused the audio output on the analog duplex output to be disabled. With this update, the S/PDIF Optical Raw output is disabled on boot so that the analog output works as expected. All users of pulseaudio are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/pulseaudio
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/making-open-source-more-inclusive
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in three versions: 8u, 11u, and 17u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/pr01
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the AMQ Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-04-29 12:48:40 UTC
[ "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/kafka_configuration_properties/using_your_subscription
Chapter 3. Preparing for server loss with replication
Chapter 3. Preparing for server loss with replication Follow these guidelines to establish a replication topology that will allow you to respond to losing a server. 3.1. Guidelines for connecting IdM replicas in a topology Connect each replica to at least two other replicas This ensures that information is replicated not just between the initial replica and the first server you installed, but between other replicas as well. Connect a replica to a maximum of four other replicas (not a hard requirement) A large number of replication agreements per server does not add significant benefits. A receiving replica can only be updated by one other replica at a time and meanwhile, the other replication agreements are idle. More than four replication agreements per replica typically means a waste of resources. Note This recommendation applies to both certificate replication and domain replication agreements. There are two exceptions to the limit of four replication agreements per replica: You want failover paths if certain replicas are not online or responding. In larger deployments, you want additional direct links between specific nodes. Configuring a high number of replication agreements can have a negative impact on overall performance: when multiple replication agreements in the topology are sending updates, certain replicas can experience a high contention on the changelog database file between incoming updates and the outgoing updates. If you decide to use more replication agreements per replica, ensure that you do not experience replication issues and latency. However, note that large distances and high numbers of intermediate nodes can also cause latency problems. Connect the replicas in a data center with each other This ensures domain replication within the data center. Connect each data center to at least two other data centers This ensures domain replication between data centers. Connect data centers using at least a pair of replication agreements If data centers A and B have a replication agreement from A1 to B1, having a replication agreement from A2 to B2 ensures that if one of the servers is down, the replication can continue between the two data centers. 3.2. Replica topology examples You can create a reliable replica topology by using one of the following examples. Figure 3.1. Replica topology with four data centers, each with four servers that are connected with replication agreements Figure 3.2. Replica topology with three data centers, each with a different number of servers that are all interconnected through replication agreements 3.3. Protecting IdM CA data If your deployment contains the integrated IdM Certificate Authority (CA), install several CA replicas so you can create additional CA replicas if one is lost. Procedure Configure three or more replicas to provide CA services. To install a new replica with CA services, run ipa-replica-install with the --setup-ca option. To install CA services on a preexisting replica, run ipa-ca-install . Create CA replication agreements between your CA replicas. Warning If only one server provides CA services and it is damaged, the entire environment will be lost. If you use the IdM CA, Red Hat strongly recommends having three or more replicas with CA services installed, with CA replication agreements between them. Additional resources Planning your CA services Installing an IdM replica Planning the replica topology
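As a hedged sketch of how you might check that a replica stays within the recommended limits described above, you can list the existing replication agreements for the domain and ca suffixes; the commands below assume an enrolled administrative host with a valid Kerberos ticket:

# List domain replication agreements (segments) in the topology
ipa topologysegment-find domain
# List CA replication agreements
ipa topologysegment-find ca
# Optionally, ask IdM to check the topology against the recommended constraints
ipa topologysuffix-verify domain

Reviewing the segment list per server is a quick way to confirm that no replica participates in more than four agreements unless you have deliberately chosen otherwise.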
[ "ipa-replica-install --setup-ca", "ipa-ca-install", "ipa topologysegment-add Suffix name: ca Left node: ca-replica1.example.com Right node: ca-replica2.example.com Segment name [ca-replica1.example.com-to-ca-replica2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: ca-replica1.example.com Right node: ca-replica2.example.com Connectivity: both" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/preparing_for_disaster_recovery_with_identity_management/preparing-for-server-loss-with-replication_preparing-for-disaster-recovery
Chapter 135. Timer
Chapter 135. Timer Only consumer is supported The Timer component is used to generate message exchanges when a timer fires You can only consume events from this endpoint. 135.1. Dependencies When using timer with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-timer-starter</artifactId> </dependency> 135.2. URI format Where name is the name of the Timer object, which is created and shared across endpoints. So if you use the same name for all your timer endpoints, only one Timer object and thread will be used. Note The IN body of the generated exchange is null . So exchange.getIn().getBody() returns null . Note Advanced Scheduler See also the Quartz component that supports much more advanced scheduling. 135.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 135.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 135.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 135.4. Component Options The Timer component supports 2 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 135.5. Endpoint Options The Timer endpoint is configured using URI syntax: with the following path and query parameters: 135.5.1. Path Parameters (1 parameters) Name Description Default Type timerName (consumer) Required The name of the timer. String 135.5.2. 
Query Parameters (13 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delay (consumer) Delay before first event is triggered. 1000 long fixedRate (consumer) Events take place at approximately regular intervals, separated by the specified period. false boolean includeMetadata (consumer) Whether to include metadata in the exchange such as fired time, timer name, timer count etc. This information is default included. true boolean period (consumer) If greater than 0, generate periodic events every period. 1000 long repeatCount (consumer) Specifies a maximum limit of number of fires. So if you set it to 1, the timer will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. long exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern daemon (advanced) Specifies whether or not the thread associated with the timer endpoint runs as a daemon. The default value is true. true boolean pattern (advanced) Allows you to specify a custom Date pattern to use for setting the time option using URI syntax. String synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean time (advanced) A java.util.Date the first event should be generated. If using the URI, the pattern expected is: yyyy-MM-dd HH:mm:ss or yyyy-MM-dd'T'HH:mm:ss. Date timer (advanced) To use a custom Timer. Timer 135.6. Exchange Properties When the timer is fired, it adds the following information as properties to the Exchange : Name Type Description Exchange.TIMER_NAME String The value of the name option. Exchange.TIMER_TIME Date The value of the time option. Exchange.TIMER_PERIOD long The value of the period option. Exchange.TIMER_FIRED_TIME Date The time when the consumer fired. Exchange.TIMER_COUNTER Long The current fire counter. Starts from 1. 135.7. Sample To set up a route that generates an event every 60 seconds: from("timer://foo?fixedRate=true&period=60000").to("bean:myBean?method=someMethodName"); The above route will generate an event and then invoke the someMethodName method on the bean called myBean in the Registry. And the route in Spring DSL: <route> <from uri="timer://foo?fixedRate=true&amp;period=60000"/> <to uri="bean:myBean?method=someMethodName"/> </route> 135.8. Firing as soon as possible Since Camel 2.17 You may want to fire messages in a Camel route as soon as possible you can use a negative delay: <route> <from uri="timer://foo?delay=-1"/> <to uri="bean:myBean?method=someMethodName"/> </route> In this way the timer will fire messages immediately. You can also specify a repeatCount parameter in conjunction with a negative delay to stop firing messages after a fixed number has been reached. 
If you don't specify a repeatCount, the timer will continue firing messages until the route is stopped. 135.9. Firing only once You may want to fire a message in a Camel route only once, such as when starting the route. To do that, use the repeatCount option as shown: <route> <from uri="timer://foo?repeatCount=1"/> <to uri="bean:myBean?method=someMethodName"/> </route> 135.10. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.timer.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.timer.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.timer.enabled Whether to enable auto configuration of the timer component. This is enabled by default. Boolean
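Because these Spring Boot options are ordinary configuration properties, they can be set in application.properties. The following is a minimal sketch; the file path follows common Spring Boot conventions and is an assumption, not something prescribed by this chapter:

# Append timer component settings to the application configuration (hypothetical path)
cat >> src/main/resources/application.properties <<'EOF'
# Keep auto configuration of the timer component enabled (the default)
camel.component.timer.enabled=true
# Send exceptions from the timer consumer to the Camel routing Error Handler
camel.component.timer.bridge-error-handler=true
EOF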
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-timer-starter</artifactId> </dependency>", "timer:name[?options]", "timer:timerName", "from(\"timer://foo?fixedRate=true&period=60000\").to(\"bean:myBean?method=someMethodName\");", "<route> <from uri=\"timer://foo?fixedRate=true&amp;period=60000\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>", "<route> <from uri=\"timer://foo?delay=-1\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>", "<route> <from uri=\"timer://foo?repeatCount=1\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-timer-component-starter
6.3 Technical Notes
6.3 Technical Notes Red Hat Enterprise Linux 6 Detailed notes on the changes implemented in Red Hat Enterprise Linux 6.3 Edition 3 Red Hat Engineering Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/index
Preface
Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 7.2 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. Capabilities and limits of Red Hat Enterprise Linux 7 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/pref-release_notes-preface
Chapter 1. Getting support
Chapter 1. Getting support If you experience difficulty with a procedure described in this documentation, or with Red Hat Quay in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your deployment, you can use the Red Hat Quay debugging tool, or check the health endpoint of your deployment to obtain information about your problem. After you have debugged or obtained health information about your deployment, you can search the Red Hat Knowledgebase for a solution or file a support ticket. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue to the ProjectQuay project. Provide specific details, such as the section name and Red Hat Quay version. 1.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. The Red Hat Quay Support Team also maintains a Consolidate troubleshooting article for Red Hat Quay that details solutions to common problems. This is an evolving document that can help users navigate various issues effectively and efficiently. 1.2. Searching the Red Hat Knowledgebase In the event of an Red Hat Quay issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including: Red Hat Quay components (such as database ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click Search . Select the Red Hat Quay product filter. Select the Knowledgebase content type filter. 1.3. Submitting a support case Prerequisites You have a Red Hat Customer Portal account. You have a Red Hat standard or premium Subscription. Procedure Log in to the Red Hat Customer Portal and select Open a support case . Select the Troubleshoot tab. For Summary , enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, continue to the following step. For Product , select Red Hat Quay . Select the version of Red Hat Quay that you are using. Click Continue . Optional. Drag and drop, paste, or browse to upload a file. This could be debug logs gathered from your Red Hat Quay deployment. Click Get support to file your ticket.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/troubleshooting_red_hat_quay/getting-support
Chapter 1. Overview of GNOME environments
Chapter 1. Overview of GNOME environments You can switch between several user interfaces and graphics back ends in GNOME. Important To function properly, GNOME requires your system to support 3D acceleration . This includes bare metal systems, as well as hypervisor solutions such as VMWare . If GNOME does not start or performs poorly on your VMWare virtual machine (VM), see Why does the GUI fail to start on my VMware virtual machine? (Red Hat Knowledgebase) 1.1. GNOME environments, back ends, and display protocols In RHEL 9, there are two available GNOME environments: GNOME Standard GNOME Classic Both environments can use two different protocols as their graphical back ends: The Wayland protocol, which uses GNOME Shell as the Wayland compositor and display server. This solution of display server is further referred as GNOME Shell on Wayland . The X11 protocol, which uses X.Org as the display server. The default combination in RHEL 9 is the GNOME Standard environment using GNOME Shell on Wayland as the display server. However, due to certain Wayland limitations, you might want to switch the graphics protocol stack to X11 . You can also switch from GNOME Standard to GNOME Classic. Thus, you can select from the following combinations of back ends and environments when logging in: GNOME Shell on Wayland (the default combination in RHEL 9) GNOME Shell on X11 GNOME Classic on Wayland GNOME Classic on X11 Additional resources For information about how to switch the environments, see Selecting GNOME environment and display protocol . 1.2. GNOME Standard The GNOME Standard user interface includes these major components: Top bar The horizontal bar at the top of the screen provides access to some of the basic functions of GNOME Standard, such as the Activities Overview , clock and calendar, system status icons, and the system menu . System menu The system menu is located in the upper-right corner, and provides the following functionality: Updating settings Controlling the sound volume Accessing your Wi-Fi connection Switching the user Logging out Turning off the computer Activities Overview The Activities Overview features windows and applications views that let you run applications and windows and switch between them. The search entry at the top allows for searching various items available on the desktop, including applications, documents, files, and configuration tools. The horizontal bar on the bottom contains a list of favorite and running applications. You can add or remove applications from the default list of favorites. Message tray The message tray provides access to pending notifications. The message tray shows when you press Super + M . The GNOME Standard desktop 1.3. GNOME Classic GNOME Classic represents a mode for users who prefer a more traditional desktop experience that is similar to the GNOME 2 environment used with RHEL 6. It is based on GNOME 3 technologies, and at the same time it includes multiple features similar to GNOME 2. The GNOME Classic user interface consists of these major components: Applications and Places The Applications menu is displayed at the upper-left corner of the screen. It gives you access to applications organized into categories. If you enable window overview, you can also open the Activities Overview from that menu. The Places menu is displayed to the Applications menu on the top bar. It gives you quick access to important folders, for example Downloads or Pictures . 
Taskbar The taskbar is displayed at the bottom of the screen, and features: A window list A notification icon displayed to the window list A short identifier for the current workspace and total number of available workspaces displayed to the notification icon Four available workspaces In GNOME Classic, the number of available workspaces is set to 4 by default. Minimize and maximize buttons Window title bars in GNOME Classic feature the minimize and maximize buttons that let you quickly minimize the windows to the window list, or maximize them to take up all of the space on the desktop. A traditional Super + Tab window switcher In GNOME Classic, windows in the Super + Tab window switcher are not grouped by application. System menu The system menu is located in the upper-right corner, and enables the following actions: Updating settings Controlling the sound volume Accessing your Wi-Fi connection Switching the user Logging out Turning off the computer The GNOME Classic desktop with the Favorites submenu of the Applications menu 1.4. Enabling window overview in GNOME Classic In GNOME Classic, the overview of open windows is not available by default. This procedure enables the window overview for all users on the system. Important Enabling the window overview by this procedure is not a permanent change. Each update of the gnome-classic-session package overwrites the configuration file to the default settings, which disable the window overview. To keep the window overview enabled, apply the procedure after each update of gnome-classic-session . Procedure Open the /usr/share/gnome-shell/modes/classic.json file as the root user. Find the following line in the file: Change the line to the following: Save changes, and close the /usr/share/gnome-shell/modes/classic.json file. Restart the user session. Verification In your GNOME Classic session, open multiple windows. Press the Super key to open the window overview. In the overview, check that: The Dash (the horizontal panel on the bottom of the screen) is displayed. The bottom panel is not displayed. Window overview with "hasOverview": true With the default settings ( "hasOverview": false ), the overview has the following features: The Dash is not displayed. The bottom panel is displayed. It includes the Window picker button in its left part and the workspace switcher in its right part. Window overview with "hasOverview": false 1.5. Graphics back ends in RHEL 9 In RHEL 9, you can choose between two protocols to build a graphical user interface: Wayland The Wayland protocol uses GNOME Shell as its compositor and display server, which is further referred to as GNOME Shell on Wayland . X11 The X11 protocol uses X.Org as the display server. Displaying graphics based on this protocol works the same way as in RHEL 7, where this was the only option. New installations of RHEL 9 automatically select GNOME Shell on Wayland . However, you can switch to X.Org , or select the required combination of GNOME environment and display server. X11 applications Client applications need to be ported to the Wayland protocol or use a graphical toolkit that has a Wayland backend, such as GTK, to be able to work natively with the compositor and display server based on Wayland . Legacy X11 applications that cannot be ported to Wayland automatically use Xwayland as a proxy between the X11 legacy clients and the Wayland compositor. Xwayland functions both as an X11 server and a Wayland client. 
The role of Xwayland is to translate the X11 protocol into the Wayland protocol and vice versa, so that X11 legacy applications can work with the display server based on Wayland. On GNOME Shell on Wayland, Xwayland starts automatically at login, which ensures that most X11 legacy applications work as expected when using GNOME Shell on Wayland. However, the X11 and Wayland protocols are different, and certain clients that rely on features specific to X11 might behave differently under Xwayland. For such specific clients, you can switch to the X.Org display server. Input devices RHEL 9 uses a unified input stack, libinput, which manages all common device types, such as mice, touchpads, touchscreens, tablets, trackballs and pointing sticks. This unified stack is used both by the X.Org and by the GNOME Shell on Wayland compositor. GNOME Shell on Wayland uses libinput directly for all devices, and no switchable driver support is available. Under X.Org, libinput is implemented as the X.Org libinput driver, and you can optionally enable the legacy X.Org evdev driver if libinput does not support your input device. Additional resources You can find the current list of environments for which Wayland is not available in the /usr/lib/udev/rules.d/61-gdm.rules file. For additional information about the Wayland project, see Wayland documentation. 1.6. Selecting GNOME environment and display protocol The default desktop environment for RHEL 9 is GNOME Standard with GNOME Shell on Wayland as the display server. However, due to certain limitations of Wayland, you might want to switch the graphics protocol stack. You might also want to switch from GNOME Standard to GNOME Classic. The change of GNOME environment and graphics protocol stack is persistent across user logouts, and also when powering off or rebooting the computer. Procedure From the login screen (GDM), click the gear button in the lower right corner of the screen. Note You cannot access this option from the lock screen. The login screen appears when you first start RHEL or when you log out of your current session. From the drop-down menu that appears, select the option that you prefer. In the menu, the X.Org display server is also marked as X11. 1.7. Disabling Wayland for all users You can disable the Wayland session for all users on the system, so that they always log in with the X11 session. Procedure Open the /etc/gdm/custom.conf file as the root user. Locate the following line in the [daemon] section of the file: Uncomment the line by removing the # character. As a result, the line says: Reboot the system.
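The same change can be scripted. The following is a minimal sketch, assuming root privileges and the default file location given above; the sed expression only uncomments the line that is already present in the file:

# Uncomment WaylandEnable=false in the [daemon] section of /etc/gdm/custom.conf
sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' /etc/gdm/custom.conf
# Reboot for the change to take effect
systemctl reboot
# After logging back in, confirm the session type (prints x11 when Wayland is disabled)
echo $XDG_SESSION_TYPE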
[ "\"hasOverview\": false", "\"hasOverview\": true", "#WaylandEnable=false", "WaylandEnable=false" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/assembly_overview-of-gnome-environments_getting-started-with-the-gnome-desktop-environment
Introduction to the OpenStack Dashboard
Introduction to the OpenStack Dashboard Red Hat OpenStack Platform 17.0 An overview of the Red Hat OpenStack Platform Dashboard graphical user interface OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/introduction_to_the_openstack_dashboard/index
Chapter 3. Create a product
Chapter 3. Create a product The product listing provides marketing and technical information, showcasing your product's features and advantages to potential customers. It lays the foundation for adding all necessary components to your product for certification. Prerequisites Verify the functionality of your product on the target Red Hat platform, in addition to the specific certification testing requirements. If running your product on the targeted Red Hat platform results in a substandard experience then you must resolve the issues before certification. Procedure Red Hat recommends completing all optional fields in the listing tabs for a comprehensive product listing. More information helps mutual customers make informed choices. Red Hat encourages collaboration with your product manager, marketing representative, or other product experts when entering information for your product listing. Fields marked with an asterisk (*) are mandatory. Procedure Log in to the Red Hat Partner Connect Portal . Go to the Certified technology portal tab and click Visit the portal . On the header bar, click Product management . From the Listing and certification tab click Manage products . From the My Products page, click Create Product . A Create New Product dialog opens. Enter the Product name . From the What kind of product would you like to certify? drop-down, select the required product category and click Create product . For example, select OpenStack Infrastructure for creating an OpenStack platform based product listing. A new page with your Product name opens. It comprises the following tabs: Section 3.1, "Overview" Section 3.2, "Product Information" Section 3.3, "Components" Section 3.4, "Support" Along with the following tabs, the page header provides the Product Score details. Product Score evaluates your product information and displays a score. It can be: Fair Good Excellent Best Click How do I improve my score? to improve your product score. After providing the product listing details, click Save before moving to the section. 3.1. Overview This tab consists of a series of tasks that you must complete to publish your product: Section 3.1.1, "Complete product listing details" Section 3.1.2, "Complete company profile information" Section 3.1.3, "Add at least one product component" Section 3.1.4, "Certify components for your listing" 3.1.1. Complete product listing details To complete your product listing details, click Start . The Product Information tab opens. Enter all the essential product details and click Save . 3.1.2. Complete company profile information To complete your company profile information, click Start . After entering all the details, click Submit . To modify the existing details, click Review . The Account Details page opens. Review and modify the Company profile information and click Submit . 3.1.3. Add at least one product component Click Start . You are redirected to the Components tab. To add a new or existing product component, click Add component . For adding a new component, In the Component Name text box, enter the component name. For What kind of standalone component are you creating? select OpenStack Infrastructure for certifying a plugin or driver that uses your own container images on Red Hat OpenStack Platform. Click . Are your product's containers already a part of the Red Hat OpenStack Platform distribution? Your product must use container images provided by Red Hat as part of the RHOSP native distribution. 
If you have not customized the container images, select Yes . Your container images are already certified, and you need to certify your product only. If you have customized the container images with, for example, additional software, select No . You will need to certify your container images as well as your product. From the Services drop-down menu, select the function of your product: Neutron (Networking) Cinder (Block Storage) Manila (File Storage) Click Add Component . For the Red Hat OpenStack Version , version 17 is enabled by default. For adding an existing component, from the Add Component dialog, select Existing Component . From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . 3.1.4. Certify components for your listing To certify the components for your listing, click Start . If you have existing product components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the components Select the components for certification. After completing all the above tasks you will see a green tick mark corresponding to all the options. The Overview tab also provides the following information: Product contacts - Provides Product marketing and Technical contact information. Click Add contacts to product to provide the contact information Click Edit to update the information. Components in product - Provides the list of the components attached to the product along with their last updated information. Click Add components to product to add new or existing components to your product. Click Edit components to update the existing component information. After publishing the product listing, you can view your Product Readiness Score and Ways to raise your score on the Overview tab. 3.2. Product Information Through this tab you can provide all the essential information about your product. The product details are published along with your product on the Red Hat Ecosystem catalog. General tab: Provide basic details of the product, including product name and description. Enter the Product Name . Optional: Upload the Product Logo according to the defined guidelines. Enter a Brief description and a Long description . Click Save . Features & Benefits tab: Provide important features of your product. Optional: Enter the Title and Description . Optional: To add additional features for your product, click + Add new feature . Click Save . Quick start & Config tab: Add links to any quick start guide or configuration document to help customers deploy and start using your product. Optional: Enter Quick start & configuration instructions . Click Save . Select Hide default instructions check box, if you don't want to display them. Linked resources tab: Add links to supporting documentation to help our customers use your product. The information is mapped to and is displayed in the Documentation section on the product's catalog page. Note It is mandatory to add a minimum of three resources. Red Hat encourages you to add more resources, if available. Select the Type drop-down menu, and enter the Title and Description of the resource. Enter the Resource URL . Optional: To add additional resources for your product, click + Add new Resource . Click Save . 
FAQs tab: Add frequently asked questions and answers of the product's purpose, operation, installation, or other attribute details. You can include common customer queries about your product and services. Enter Question and Answer . Optional: To add additional FAQs for your product, click + Add new FAQ . Click Save . Support tab: This tab lets you provide contact information of your Support team. Enter the Support description , Support web site , Support phone number , and Support email address . Click Save . Contacts tab: Provide contact information of your marketing and technical team. Enter the Marketing contact email address and Technical contact email address . Optional: To add additional contacts, click + Add another . Click Save . Legal tab: Provide the product related license and policy information. Enter the License Agreement URL for the product and Privacy Policy URL . Click Save . SEO tab: Use this tab to improve the discoverability of your product for our mutual customers, enhancing visibility both within the Red Hat Ecosystem Catalog search and on internet search engines. Providing a higher number of search aliases (key and value pairs) will increase the discoverability of your product. Select the Product Category . Enter the Key and Value to set up Search aliases. Click Save . Optional: To add additional key-value pair, click + Add new key-value pair . Note Add at least one Search alias for your product. Red Hat encourages you to add more aliases, if available. 3.3. Components Use this tab to add components to your product listing. Through this tab you can also view a list of attached components linked to your Product Listing. Alternatively, to attach a component to the Product Listing, you can complete the Add at least one product component option available on the Overview tab of a product listing. To add a new or existing product component, click Add component . For adding a new component, in the Component Name text box, enter the component name. For What kind of standalone component are you creating? select OpenStack Infrastructure for certifying a plugin or driver that uses your own container images on Red Hat OpenStack Platform. Click . Are your product's containers already a part of the Red Hat OpenStack Platform distribution? Your product must use container images provided by Red Hat as part of the RHOSP native distribution. If you have not customized the container images, select Yes . Your container images are already certified, and you need to certify your product only. If you have customized the container images with, for example, additional software, select No . You will need to certify your container images as well as your product. From the Services drop-down menu, select the function of your product: Neutron (Networking) Cinder (Block Storage) Manila (File Storage) Click Add Component . For the Red Hat OpenStack Version , version 17 is enabled by default. For adding an existing component, from the Add Component dialog, select Existing Component . From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . Note You can add the same component to multiple products listings. All attached components must be published before the product listing can be published. 
After attaching components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the attached components Alternatively, to search for specific components, type the component's name in the Search by component Name text box. 3.4. Support The Red Hat Partner Acceleration Desk (PAD) is a Products and Technologies level partner help desk service that allows the current and prospective partners a central location to ask non-technical questions pertaining to Red Hat offerings, partner programs, product certification, engagement process, and so on. You can also contact the Red Hat Partner Acceleration Desk for any technical questions you may have regarding the Certification. Technical help requests will be redirected to the Certification Operations team. Through the Partner Subscriptions program, Red Hat offers free, not-for-resale software subscriptions that you can use to validate your product on the target Red Hat platform. To request access to the program, follow the instructions on the Partner Subscriptions site. To request support, click Open a support case. See PAD - How to open & manage PAD cases , to open a PAD ticket. To view the list of existing support cases, click View support cases . 3.5. Removing a product After creating a product listing if you wish to remove it, go to the Overview tab and click Delete . A published product must first be unpublished before it can be deleted. Red Hat retains information related to deleted products even after you delete the product.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/proc_create-a-product-for-openstack-infrastructure_onboarding-certification-partners
22.3. Booleans
22.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: rsync_anon_write Having this Boolean enabled allows rsync in the rsync_t domain to manage files, links and directories that have a type of public_content_rw_t . Often these are public files used for public file transfer services. Files and directories must be labeled this type. rsync_client Having this Boolean enabled allows rsync to initiate connections to ports defined as rsync_port_t , as well as allowing the daemon to manage files, links, and directories that have a type of rsync_data_t . Note that rsync must be in the rsync_t domain in order for SELinux to enact its control over it. The configuration example in this chapter demonstrates rsync running in the rsync_t domain. rsync_export_all_ro Having this Boolean enabled allows rsync in the rsync_t domain to export NFS and CIFS volumes with read-only access to clients. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
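For example, a minimal sketch of enabling these Booleans persistently with setsebool follows; the shared directory path is a hypothetical example, and setsebool -P requires root privileges:

# Allow rsync running in the rsync_t domain to manage public_content_rw_t content
setsebool -P rsync_anon_write on
# Allow rsync to connect to ports labeled rsync_port_t and manage rsync_data_t content
setsebool -P rsync_client on
# Confirm the current values
getsebool rsync_anon_write rsync_client rsync_export_all_ro
# Label a hypothetical public upload directory and apply the context
semanage fcontext -a -t public_content_rw_t "/srv/rsync(/.*)?"
restorecon -Rv /srv/rsync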
[ "~]USD getsebool -a | grep service_name", "~]USD sepolicy booleans -b boolean_name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-rsync-booleans
Chapter 14. Pod [v1]
Chapter 14. Pod [v1] Description Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. Type object 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodSpec is a description of a pod. status object PodStatus represents information about the status of a pod. Status may trail the actual state of a system, especially if the node that hosts the pod cannot contact the control plane. 14.1.1. .spec Description PodSpec is a description of a pod. Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object Affinity is a group of affinity scheduling rules. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. 
This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object PodOS defines the OS parameters of a pod. overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. 
In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. serviceAccount string DeprecatedServiceAccount is a depreciated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 14.1.2. .spec.affinity Description Affinity is a group of affinity scheduling rules. Type object Property Type Description nodeAffinity object Node affinity is a group of node affinity scheduling rules. podAffinity object Pod affinity is a group of inter pod affinity scheduling rules. podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules. 14.1.3. .spec.affinity.nodeAffinity Description Node affinity is a group of node affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 14.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 14.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). 
A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Property Type Description preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 14.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 14.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 14.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 14.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.11. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 14.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 14.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 14.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 14.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 14.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.18. .spec.affinity.podAffinity Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 14.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 14.1.20. 
.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 14.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.22. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 14.1.23. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.24. .spec.affinity.podAntiAffinity Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 14.1.25. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 14.1.26. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 14.1.27. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 
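The manifest below is a minimal sketch of how the preferred and required pod anti-affinity terms described in the sections above fit together in a Pod spec. The pod name, the app: web label selector, and the container image are illustrative assumptions, not values required by the API.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity-example             # hypothetical name, for illustration only
spec:
  affinity:
    podAntiAffinity:
      # Soft rule: prefer not to land on a node that already runs a pod
      # labeled app=web; weight (1-100) sets the rule's relative importance.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web                     # assumed label selector
          topologyKey: kubernetes.io/hostname
      # Hard rule: never schedule into a zone that already runs app=web pods.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: registry.example.com/app:latest # placeholder image
```

Here the preferred term only influences node scoring, while the required term is a hard constraint evaluated per topology.kubernetes.io/zone domain, mirroring the preferred/required distinction documented in the surrounding sections.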
14.1.28. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 14.1.29. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.30. .spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 14.1.31. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. 
If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. 
restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. 
tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 14.1.32. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 14.1.33. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 14.1.34. .spec.containers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 14.1.35. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 14.1.36. .spec.containers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.37. .spec.containers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.38. .spec.containers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 14.1.39. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 14.1.40. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 14.1.41. .spec.containers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 14.1.42. .spec.containers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 14.1.43. .spec.containers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 14.1.44. .spec.containers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.45. .spec.containers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.46. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.47. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.48. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.49. .spec.containers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.50. .spec.containers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.51. .spec.containers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.52. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.53. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.54. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.55. .spec.containers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.56. .spec.containers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. 
timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.57. .spec.containers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.58. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.59. .spec.containers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.60. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.61. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.62. .spec.containers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.63. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 14.1.64. 
.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 14.1.65. .spec.containers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.66. .spec.containers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.67. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.68. .spec.containers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.69. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.70. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.71. .spec.containers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.72. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 14.1.73. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 14.1.74. .spec.containers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.75. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.76. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.77. .spec.containers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.78. .spec.containers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 14.1.79. .spec.containers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.80. .spec.containers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.81. .spec.containers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. 
All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.82. .spec.containers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.83. .spec.containers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.84. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. 
service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.85. .spec.containers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.86. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.87. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.88. .spec.containers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.89. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 14.1.90. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 14.1.91. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 14.1.92. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). 
- "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 14.1.93. .spec.dnsConfig Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 14.1.94. .spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 14.1.95. .spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 14.1.96. .spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 14.1.97. .spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. 
Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. 
ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. restartPolicy string Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. 
- "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 14.1.98. .spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 14.1.99. .spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 14.1.100. .spec.ephemeralContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 14.1.101. .spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 14.1.102. .spec.ephemeralContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.103. 
.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.104. .spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 14.1.105. .spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 14.1.106. .spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 14.1.107. .spec.ephemeralContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 14.1.108. .spec.ephemeralContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 14.1.109. .spec.ephemeralContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 
preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 14.1.110. .spec.ephemeralContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.111. .spec.ephemeralContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.112. .spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.113. .spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.114. .spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.115. .spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.116. .spec.ephemeralContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.117. 
.spec.ephemeralContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.118. .spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.119. .spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.120. .spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.121. .spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.122. .spec.ephemeralContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. 
tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.123. .spec.ephemeralContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.124. .spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.125. .spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.126. .spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.127. .spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. 
value string The header field value 14.1.128. .spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.129. .spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 14.1.130. .spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 14.1.131. .spec.ephemeralContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. 
timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.132. .spec.ephemeralContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.133. .spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.134. .spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.135. .spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.136. .spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.137. .spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.138. .spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 14.1.139. .spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. 
restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 14.1.140. .spec.ephemeralContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.141. .spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.142. .spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.143. .spec.ephemeralContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. 
- "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.144. .spec.ephemeralContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 14.1.145. .spec.ephemeralContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.146. .spec.ephemeralContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. 
- "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.147. .spec.ephemeralContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.148. .spec.ephemeralContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.149. 
.spec.ephemeralContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.150. .spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.151. .spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.152. .spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.153. .spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.154. .spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.155. .spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 14.1.156. .spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 14.1.157. .spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. 
Cannot be updated. Type array 14.1.158. .spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 14.1.159. .spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 14.1.160. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 14.1.161. .spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 14.1.162. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.163. .spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. 
If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 14.1.164. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. 
Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. 
stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 14.1.165. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 14.1.166. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 14.1.167. 
.spec.initContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 14.1.168. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 14.1.169. .spec.initContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.170. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.171. .spec.initContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 14.1.172. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 14.1.173. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 14.1.174. .spec.initContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. 
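For orientation only, the following minimal Pod sketch shows the envFrom mechanism described above populating an init container's environment from a ConfigMap and a Secret. The object names, images, and prefix (app-config, app-secret, CFG_) are hypothetical placeholders, not values taken from this reference.
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-example                          # hypothetical Pod name
spec:
  initContainers:
  - name: init-config
    image: registry.example.com/busybox:latest   # hypothetical image
    command: ["sh", "-c", "env | sort"]
    envFrom:
    - prefix: CFG_                                # optional C_IDENTIFIER prefix added to each key
      configMapRef:
        name: app-config                          # hypothetical ConfigMap in the same namespace
        optional: true                            # the container still starts if the ConfigMap is missing
    - secretRef:
        name: app-secret                          # hypothetical Secret in the same namespace
  containers:
  - name: app
    image: registry.example.com/app:1.0           # hypothetical image
With optional: true on the ConfigMap reference, the container starts even if app-config is absent; the Secret reference, which omits optional, must resolve for the container to start.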
Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 14.1.175. .spec.initContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 14.1.176. .spec.initContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 14.1.177. .spec.initContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.178. .spec.initContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.179. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.180. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.181. 
.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.182. .spec.initContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.183. .spec.initContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 14.1.184. .spec.initContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.185. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.186. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.187. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.188. .spec.initContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. 
Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.189. .spec.initContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.190. .spec.initContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.191. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.192. .spec.initContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.193. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.194. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.195. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.196. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 14.1.197. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 14.1.198. .spec.initContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. 
initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.199. .spec.initContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.200. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.201. .spec.initContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.202. 
.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.203. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.204. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.205. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 14.1.206. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 14.1.207. .spec.initContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.208. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.209. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.210. .spec.initContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. 
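As an illustration only, here is a minimal sketch of a container-level securityContext that combines several of the fields documented in the table that follows. The container names, images, and numeric IDs are hypothetical placeholders, not recommended values.
apiVersion: v1
kind: Pod
metadata:
  name: securitycontext-example                  # hypothetical Pod name
spec:
  initContainers:
  - name: init-perms
    image: registry.example.com/tools:latest     # hypothetical image
    command: ["sh", "-c", "id"]
    securityContext:
      runAsNonRoot: true                         # kubelet refuses to start the container as UID 0
      runAsUser: 1000                            # illustrative UID
      runAsGroup: 3000                           # illustrative GID
      allowPrivilegeEscalation: false            # sets no_new_privs on the container process
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                            # removed capabilities
      seccompProfile:
        type: RuntimeDefault                     # container runtime's default seccomp profile
  containers:
  - name: app
    image: registry.example.com/app:1.0          # hypothetical image
Because these fields also exist on PodSecurityContext, any value set here overrides the pod-level setting for this container, as noted in the field descriptions below.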
Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN. Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc for the container stays intact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.211. .spec.initContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 14.1.212. .spec.initContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. 
role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.213. .spec.initContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.214. .spec.initContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.215. .spec.initContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. 
Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 14.1.216. .spec.initContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 14.1.217. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 14.1.218. .spec.initContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 14.1.219. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 14.1.220. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 14.1.221. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 14.1.222. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 14.1.223. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 14.1.224. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 14.1.225. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 14.1.226. .spec.os Description PodOS defines the OS parameters of a pod. Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. 
Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 14.1.227. .spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 14.1.228. .spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 14.1.229. .spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 14.1.230. .spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. 14.1.231. .spec.resourceClaims[].source Description ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 14.1.232. .spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. Type array 14.1.233. .spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. 
14.1.234. .spec.securityContext Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership (and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Always" indicates that volume's ownership and permissions should always be changed whenever volume is mounted inside a Pod. This is the default behavior. - "OnRootMismatch" indicates that volume's ownership and permissions will be changed only when permission and ownership of root directory does not match with expected permissions on the volume. This can help shorten the time it takes to change ownership and permissions of a volume. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container.
Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 14.1.235. .spec.securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 14.1.236. .spec.securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 14.1.237. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 14.1.238. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 14.1.239. .spec.securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. 
runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 14.1.240. .spec.tolerations Description If specified, the pod's tolerations. Type array 14.1.241. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 14.1.242. .spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 14.1.243. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. 
MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. 
Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 14.1.244. .spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 14.1.245. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. 
A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. configMap object Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. csi object Represents a source location of a volume to mount, managed by an external CSI driver downwardAPI object DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. emptyDir object Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. ephemeral object Represents an ephemeral volume that is handled by a normal storage driver. fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. gitRepo object Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. persistentVolumeClaim object PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. 
projected object Represents a projected volume source quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOVolumeSource represents a persistent ScaleIO volume secret object Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. storageos object Represents a StorageOS persistent volume resource. vsphereVolume object Represents a vSphere volume resource. 14.1.246. .spec.volumes[].awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 14.1.247. .spec.volumes[].azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 14.1.248. .spec.volumes[].azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 14.1.249. .spec.volumes[].cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 14.1.250. .spec.volumes[].cephfs.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.251. .spec.volumes[].cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 14.1.252. .spec.volumes[].cinder.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.253. .spec.volumes[].configMap Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. 
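For illustration only (the ConfigMap name app-config and the key settings.json are hypothetical, not values defined by the API), a sketch of a Pod that mounts a ConfigMap as a volume and maps a single key to a path; the individual fields are described below:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example                        # hypothetical name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal  # assumed image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: config
      mountPath: /etc/app-config
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config                                  # hypothetical ConfigMap in the same namespace
      optional: false
      defaultMode: 0644                                 # octal in YAML; JSON requires the decimal form (420)
      items:
      - key: settings.json                              # hypothetical key in the ConfigMap Data field
        path: settings.json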
Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 14.1.254. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.255. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.256. .spec.volumes[].csi Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. 
Consult your driver's documentation for supported values. 14.1.257. .spec.volumes[].csi.nodePublishSecretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.258. .spec.volumes[].downwardAPI Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 14.1.259. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 14.1.260. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 14.1.261. .spec.volumes[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.262. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.263. .spec.volumes[].emptyDir Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling.
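As a minimal sketch (all names are hypothetical, and the Memory medium and 256Mi limit are illustrative choices), an emptyDir volume shared by two containers of the same Pod; the individual fields are described below:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example                                # hypothetical name
spec:
  containers:
  - name: writer
    image: registry.access.redhat.com/ubi9/ubi-minimal  # assumed image
    command: ["sh", "-c", "while true; do date >> /scratch/log; sleep 5; done"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  - name: reader
    image: registry.access.redhat.com/ubi9/ubi-minimal  # assumed image
    command: ["sh", "-c", "tail -F /scratch/log"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
      readOnly: true
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                                    # "" (node default medium) or Memory
      sizeLimit: 256Mi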
Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 14.1.264. .spec.volumes[].ephemeral Description Represents an ephemeral volume that is handled by a normal storage driver. Type object Property Type Description volumeClaimTemplate object PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. 14.1.265. .spec.volumes[].ephemeral.volumeClaimTemplate Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes 14.1.266. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. 
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 14.1.267. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 14.1.268. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 14.1.269. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.270. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.271. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.272. .spec.volumes[].fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 14.1.273. .spec.volumes[].flexVolume Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. 
Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 14.1.274. .spec.volumes[].flexVolume.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.275. .spec.volumes[].flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 14.1.276. .spec.volumes[].gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 14.1.277. .spec.volumes[].gitRepo Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. 
Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 14.1.278. .spec.volumes[].glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 14.1.279. .spec.volumes[].hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 14.1.280. .spec.volumes[].iscsi Description Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. 
The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 14.1.281. .spec.volumes[].iscsi.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.282. .spec.volumes[].nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 14.1.283. .spec.volumes[].persistentVolumeClaim Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 14.1.284. .spec.volumes[].photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 14.1.285. .spec.volumes[].portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 14.1.286. .spec.volumes[].projected Description Represents a projected volume source Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 14.1.287. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 14.1.288. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. downwardAPI object Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. secret object Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. serviceAccountToken object ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). 14.1.289. .spec.volumes[].projected.sources[].configMap Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 14.1.290. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. 
Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.291. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.292. .spec.volumes[].projected.sources[].downwardAPI Description Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 14.1.293. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 14.1.294. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 14.1.295. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 14.1.296. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 14.1.297. .spec.volumes[].projected.sources[].secret Description Adapts a secret into a projected volume. 
The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specifies whether the Secret or its key must be defined 14.1.298. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.299. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values; JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.300. .spec.volumes[].projected.sources[].serviceAccountToken Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pod's runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.
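As a hedged illustration of the projected volume sources documented in this section, a Pod spec might declare a volume like the following sketch. The Pod, container, volume, ConfigMap, and audience names are all hypothetical placeholders, and the octal defaultMode of 0440 corresponds to the decimal value 288 that a JSON client would send.

apiVersion: v1
kind: Pod
metadata:
  name: projected-example          # hypothetical Pod name
spec:
  containers:
  - name: app                      # hypothetical container name
    image: registry.example.com/app:latest    # placeholder image reference
    volumeMounts:
    - name: token-and-config
      mountPath: /var/run/projected
      readOnly: true
  volumes:
  - name: token-and-config
    projected:
      defaultMode: 0440            # octal in YAML; a JSON client would send 288
      sources:
      - serviceAccountToken:
          audience: vault          # hypothetical audience
          expirationSeconds: 3600
          path: token
      - configMap:
          name: app-config         # hypothetical ConfigMap
          optional: true
          items:
          - key: settings.yaml
            path: settings.yaml

Grouping a serviceAccountToken source and a configMap source in a single projected volume keeps the token and its related configuration under one mount point.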
path string path is the path relative to the mount point of the file to project the token into. 14.1.301. .spec.volumes[].quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group is the group to map volume access to. Default is no group. readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pairs (multiple entries are separated with commas) which act as the central registry for volumes. tenant string tenant is the tenant owning the given Quobyte volume in the backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin. user string user is the user to map volume access to. Defaults to the service account user. volume string volume is a string that references an already created Quobyte volume by name. 14.1.302. .spec.volumes[].rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 14.1.303. .spec.volumes[].rbd.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.304. .spec.volumes[].scaleIO Description ScaleIOVolumeSource represents a persistent ScaleIO volume. Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write).
ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 14.1.305. .spec.volumes[].scaleIO.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.306. .spec.volumes[].secret Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 14.1.307. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 14.1.308. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. 
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 14.1.309. .spec.volumes[].storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 14.1.310. .spec.volumes[].storageos.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 14.1.311. .spec.volumes[].vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 14.1.312. .status Description PodStatus represents information about the status of a pod. Status may trail the actual state of a system, especially if the node that hosts the pod cannot contact the control plane. Type object Property Type Description conditions array Current service state of pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions conditions[] object PodCondition contains details for the current condition of this pod. containerStatuses array The list has one entry per container in the manifest. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status containerStatuses[] object ContainerStatus contains details for the current status of this container. ephemeralContainerStatuses array Status for any ephemeral containers that have run in this pod. 
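The status fields listed in this table are populated by the kubelet and the API server rather than set by users. As a rough, hedged sketch with placeholder values only (the container name, image, and timestamps are hypothetical), the status stanza of a running pod might resemble:

status:
  phase: Running
  qosClass: Burstable
  startTime: "2024-01-01T08:00:00Z"          # placeholder timestamp
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2024-01-01T08:00:05Z"
  containerStatuses:
  - name: app                                 # hypothetical container name
    image: registry.example.com/app:latest    # placeholder image reference
    imageID: ""                               # abridged; normally a digest-qualified reference
    ready: true
    started: true
    restartCount: 1
    state:
      running:
        startedAt: "2024-01-01T08:00:03Z"
    lastState:
      terminated:
        exitCode: 1
        reason: Error
        finishedAt: "2024-01-01T08:00:02Z"

Here state reports the current running period, while lastState retains the previous terminated attempt that explains the non-zero restartCount.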
ephemeralContainerStatuses[] object ContainerStatus contains details for the current status of this container. hostIP string hostIP holds the IP address of the host to which the pod is assigned. Empty if the pod has not started yet. A pod can be assigned to a node that has a problem in the kubelet, which in turn means that HostIP will not be updated even if a node is assigned to the pod. hostIPs array hostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must match the hostIP field. This list is empty if the pod has not started yet. A pod can be assigned to a node that has a problem in the kubelet, which in turn means that HostIPs will not be updated even if a node is assigned to this pod. hostIPs[] object HostIP represents a single IP address allocated to the host. initContainerStatuses array The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status initContainerStatuses[] object ContainerStatus contains details for the current status of this container. message string A human readable message indicating details about why the pod is in this condition. nominatedNodeName string nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. The scheduler may decide to place the pod elsewhere if other nodes become available sooner. The scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different from PodSpec.nodeName when the pod is scheduled. phase string The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values: Pending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. Succeeded: All containers in the pod have terminated in success, and will not be restarted. Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system. Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase Possible enum values: - "Failed" means that all containers in the pod have terminated, and at least one container has terminated in a failure (exited with a non-zero exit code or was stopped by the system). - "Pending" means the pod has been accepted by the system, but one or more of the containers has not been started.
This includes time before being bound to a node, as well as time spent pulling images onto the host. - "Running" means the pod has been bound to a node and all of the containers have been started. At least one container is still running or is in the process of being restarted. - "Succeeded" means that all containers in the pod have voluntarily terminated with a container exit code of 0, and the system is not going to restart any of these containers. - "Unknown" means that for some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod. Deprecated: It isn't being set since 2015 (74da3b14b0c0f658b3bb8d2def5094686d0e9095) podIP string podIP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated. podIPs array podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet. podIPs[] object PodIP represents a single IP address allocated to the pod. qosClass string The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes Possible enum values: - "BestEffort" is the BestEffort qos class. - "Burstable" is the Burstable qos class. - "Guaranteed" is the Guaranteed qos class. reason string A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted' resize string Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed" resourceClaimStatuses array Status of resource claims. resourceClaimStatuses[] object PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim which references a ResourceClaimTemplate. It stores the generated name for the corresponding ResourceClaim. startTime Time RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod. 14.1.313. .status.conditions Description Current service state of pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions Type array 14.1.314. .status.conditions[] Description PodCondition contains details for the current condition of this pod. Type object Required type status Property Type Description lastProbeTime Time Last time we probed the condition. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, one-word, CamelCase reason for the condition's last transition. status string Status is the status of the condition. Can be True, False, Unknown. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions type string Type is the type of the condition. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions 14.1.315. .status.containerStatuses Description The list has one entry per container in the manifest. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status Type array 14.1.316. 
.status.containerStatuses[] Description ContainerStatus contains details for the current status of this container. Type object Required name ready restartCount image imageID Property Type Description allocatedResources object (Quantity) AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize. containerID string ContainerID is the ID of the container in the format '<type>://<container_id>'. Where type is a container runtime identifier, returned from Version call of CRI API (for example "containerd"). image string Image is the name of container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images . imageID string ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime. lastState object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. name string Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated. ready boolean Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field). The value is typically used to determine whether a container is ready to accept traffic. resources object ResourceRequirements describes the compute resource requirements. restartCount integer RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative. started boolean Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false. state object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. 14.1.317. .status.containerStatuses[].lastState Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.318. .status.containerStatuses[].lastState.running Description ContainerStateRunning is a running state of a container. 
Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.319. .status.containerStatuses[].lastState.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.320. .status.containerStatuses[].lastState.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.321. .status.containerStatuses[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.322. .status.containerStatuses[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.323. .status.containerStatuses[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.324. .status.containerStatuses[].state Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.325. .status.containerStatuses[].state.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.326. 
.status.containerStatuses[].state.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.327. .status.containerStatuses[].state.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.328. .status.ephemeralContainerStatuses Description Status for any ephemeral containers that have run in this pod. Type array 14.1.329. .status.ephemeralContainerStatuses[] Description ContainerStatus contains details for the current status of this container. Type object Required name ready restartCount image imageID Property Type Description allocatedResources object (Quantity) AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize. containerID string ContainerID is the ID of the container in the format '<type>://<container_id>'. Where type is a container runtime identifier, returned from Version call of CRI API (for example "containerd"). image string Image is the name of container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images . imageID string ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime. lastState object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. name string Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated. ready boolean Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field). The value is typically used to determine whether a container is ready to accept traffic. resources object ResourceRequirements describes the compute resource requirements. restartCount integer RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative. started boolean Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. 
Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false. state object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. 14.1.330. .status.ephemeralContainerStatuses[].lastState Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.331. .status.ephemeralContainerStatuses[].lastState.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.332. .status.ephemeralContainerStatuses[].lastState.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.333. .status.ephemeralContainerStatuses[].lastState.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.334. .status.ephemeralContainerStatuses[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.335. .status.ephemeralContainerStatuses[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. 
It can only be set for containers. Type array 14.1.336. .status.ephemeralContainerStatuses[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.337. .status.ephemeralContainerStatuses[].state Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.338. .status.ephemeralContainerStatuses[].state.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.339. .status.ephemeralContainerStatuses[].state.terminated Description ContainerStateTerminated is a terminated state of a container. Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.340. .status.ephemeralContainerStatuses[].state.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.341. .status.hostIPs Description hostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must match the hostIP field. This list is empty if the pod has not started yet. A pod can be assigned to a node that has a problem in the kubelet, which in turn means that HostIPs will not be updated even if a node is assigned to this pod. Type array 14.1.342. .status.hostIPs[] Description HostIP represents a single IP address allocated to the host. Type object Property Type Description ip string IP is the IP address assigned to the host 14.1.343. .status.initContainerStatuses Description The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status Type array 14.1.344. .status.initContainerStatuses[] Description ContainerStatus contains details for the current status of this container. Type object Required name ready restartCount image imageID Property Type Description allocatedResources object (Quantity) AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.
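To make the relationship between the singular and plural IP fields concrete (hostIP and hostIPs above, podIP and podIPs elsewhere in this status table), a hedged dual-stack sketch using placeholder addresses might look like:

status:
  hostIP: 192.0.2.10
  hostIPs:
  - ip: 192.0.2.10          # the first entry must match hostIP
  - ip: 2001:db8::10        # placeholder IPv6 address on a dual-stack node
  podIP: 10.128.2.7
  podIPs:
  - ip: 10.128.2.7          # the 0th entry must match podIP
  - ip: fd01:0:0:1::7       # placeholder; at most one address per IP family

A single-stack pod would carry only one entry in each list.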
containerID string ContainerID is the ID of the container in the format '<type>://<container_id>'. Where type is a container runtime identifier, returned from Version call of CRI API (for example "containerd"). image string Image is the name of container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images . imageID string ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime. lastState object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. name string Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated. ready boolean Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field). The value is typically used to determine whether a container is ready to accept traffic. resources object ResourceRequirements describes the compute resource requirements. restartCount integer RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative. started boolean Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false. state object ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. 14.1.345. .status.initContainerStatuses[].lastState Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.346. .status.initContainerStatuses[].lastState.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.347. .status.initContainerStatuses[].lastState.terminated Description ContainerStateTerminated is a terminated state of a container. 
Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.348. .status.initContainerStatuses[].lastState.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.349. .status.initContainerStatuses[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.350. .status.initContainerStatuses[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 14.1.351. .status.initContainerStatuses[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.352. .status.initContainerStatuses[].state Description ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting. Type object Property Type Description running object ContainerStateRunning is a running state of a container. terminated object ContainerStateTerminated is a terminated state of a container. waiting object ContainerStateWaiting is a waiting state of a container. 14.1.353. .status.initContainerStatuses[].state.running Description ContainerStateRunning is a running state of a container. Type object Property Type Description startedAt Time Time at which the container was last (re-)started 14.1.354. .status.initContainerStatuses[].state.terminated Description ContainerStateTerminated is a terminated state of a container. 
Type object Required exitCode Property Type Description containerID string Container's ID in the format '<type>://<container_id>' exitCode integer Exit status from the last termination of the container finishedAt Time Time at which the container last terminated message string Message regarding the last termination of the container reason string (brief) reason from the last termination of the container signal integer Signal from the last termination of the container startedAt Time Time at which execution of the container started 14.1.355. .status.initContainerStatuses[].state.waiting Description ContainerStateWaiting is a waiting state of a container. Type object Property Type Description message string Message regarding why the container is not yet running. reason string (brief) reason the container is not yet running. 14.1.356. .status.podIPs Description podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet. Type array 14.1.357. .status.podIPs[] Description PodIP represents a single IP address allocated to the pod. Type object Property Type Description ip string IP is the IP address assigned to the pod 14.1.358. .status.resourceClaimStatuses Description Status of resource claims. Type array 14.1.359. .status.resourceClaimStatuses[] Description PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim which references a ResourceClaimTemplate. It stores the generated name for the corresponding ResourceClaim. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must match the name of an entry in pod.spec.resourceClaims, which implies that the string must be a DNS_LABEL. resourceClaimName string ResourceClaimName is the name of the ResourceClaim that was generated for the Pod in the namespace of the Pod. If this is unset, then generating a ResourceClaim was not necessary. The pod.spec.resourceClaims entry can be ignored in this case. 14.2. API endpoints The following API endpoints are available: /api/v1/pods GET : list or watch objects of kind Pod /api/v1/watch/pods GET : watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/pods DELETE : delete collection of Pod GET : list or watch objects of kind Pod POST : create a Pod /api/v1/watch/namespaces/{namespace}/pods GET : watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/pods/{name} DELETE : delete a Pod GET : read the specified Pod PATCH : partially update the specified Pod PUT : replace the specified Pod /api/v1/namespaces/{namespace}/pods/{name}/log GET : read log of the specified Pod /api/v1/watch/namespaces/{namespace}/pods/{name} GET : watch changes to an object of kind Pod. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
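Several of these endpoints accept or return a complete Pod object. As a hedged sketch of the body that could be submitted to the create endpoint listed above (POST to /api/v1/namespaces/{namespace}/pods), a minimal manifest might look like the following; the name, namespace, image, and resource values are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # hypothetical name
  namespace: default
spec:
  containers:
  - name: app                    # hypothetical container name
    image: registry.example.com/app:latest    # placeholder image reference
    resources:
      requests:
        cpu: 100m
        memory: 128Mi

The read, replace, and watch endpoints return or expect the same document shape, with the server-populated status stanza added.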
/api/v1/namespaces/{namespace}/pods/{name}/status GET : read status of the specified Pod PATCH : partially update status of the specified Pod PUT : replace status of the specified Pod /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers GET : read ephemeralcontainers of the specified Pod PATCH : partially update ephemeralcontainers of the specified Pod PUT : replace ephemeralcontainers of the specified Pod 14.2.1. /api/v1/pods HTTP method GET Description list or watch objects of kind Pod Table 14.1. HTTP responses HTTP code Response body 200 - OK PodList schema 401 - Unauthorized Empty 14.2.2. /api/v1/watch/pods HTTP method GET Description watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. Table 14.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.3. /api/v1/namespaces/{namespace}/pods HTTP method DELETE Description delete collection of Pod Table 14.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Pod Table 14.5. HTTP responses HTTP code Response body 200 - OK PodList schema 401 - Unauthorized Empty HTTP method POST Description create a Pod Table 14.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.7. Body parameters Parameter Type Description body Pod schema Table 14.8. HTTP responses HTTP code Response body 200 - OK Pod schema 201 - Created Pod schema 202 - Accepted Pod schema 401 - Unauthorized Empty 14.2.4. /api/v1/watch/namespaces/{namespace}/pods HTTP method GET Description watch individual changes to a list of Pod. deprecated: use the 'watch' parameter with a list operation instead. Table 14.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.5. /api/v1/namespaces/{namespace}/pods/{name} Table 14.10.
Global path parameters Parameter Type Description name string name of the Pod HTTP method DELETE Description delete a Pod Table 14.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.12. HTTP responses HTTP code Response body 200 - OK Pod schema 202 - Accepted Pod schema 401 - Unauthorized Empty HTTP method GET Description read the specified Pod Table 14.13. HTTP responses HTTP code Response body 200 - OK Pod schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Pod Table 14.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.15. HTTP responses HTTP code Response body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Pod Table 14.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.17.
Body parameters Parameter Type Description body Pod schema Table 14.18. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty 14.2.6. /api/v1/namespaces/{namespace}/pods/{name}/log Table 14.19. Global path parameters Parameter Type Description name string name of the Pod HTTP method GET Description read log of the specified Pod Table 14.20. HTTP responses HTTP code Reponse body 200 - OK string 401 - Unauthorized Empty 14.2.7. /api/v1/watch/namespaces/{namespace}/pods/{name} Table 14.21. Global path parameters Parameter Type Description name string name of the Pod HTTP method GET Description watch changes to an object of kind Pod. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 14.22. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.8. /api/v1/namespaces/{namespace}/pods/{name}/status Table 14.23. Global path parameters Parameter Type Description name string name of the Pod HTTP method GET Description read status of the specified Pod Table 14.24. HTTP responses HTTP code Reponse body 200 - OK Pod schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Pod Table 14.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.26. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Pod Table 14.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.28. Body parameters Parameter Type Description body Pod schema Table 14.29. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty 14.2.9. /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers Table 14.30. Global path parameters Parameter Type Description name string name of the Pod HTTP method GET Description read ephemeralcontainers of the specified Pod Table 14.31. HTTP responses HTTP code Reponse body 200 - OK Pod schema 401 - Unauthorized Empty HTTP method PATCH Description partially update ephemeralcontainers of the specified Pod Table 14.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.33. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty HTTP method PUT Description replace ephemeralcontainers of the specified Pod Table 14.34. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.35. Body parameters Parameter Type Description body Pod schema Table 14.36. HTTP responses HTTP code Reponse body 200 - OK Pod schema 201 - Created Pod schema 401 - Unauthorized Empty
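The operations listed above can be driven by any Kubernetes client that authenticates against the API server. The following minimal sketch is an illustration only: the Fabric8 Kubernetes client, the default namespace, and the example-pod name are assumptions, not part of this reference. It corresponds to the GET operations on /api/v1/namespaces/{namespace}/pods/{name} and /api/v1/namespaces/{namespace}/pods/{name}/log described above.

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class PodReadExample {
    public static void main(String[] args) {
        // Builds a client from the local kubeconfig or the in-cluster service account.
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // GET /api/v1/namespaces/{namespace}/pods/{name}
            Pod pod = client.pods().inNamespace("default").withName("example-pod").get();
            if (pod != null) {
                System.out.println("Phase: " + pod.getStatus().getPhase());
            }

            // GET /api/v1/namespaces/{namespace}/pods/{name}/log
            String log = client.pods().inNamespace("default").withName("example-pod").getLog();
            System.out.println(log);
        }
    }
}

Requests that are not authenticated return the 401 - Unauthorized response listed in each table above; the PATCH and PUT operations on the status and ephemeralcontainers subresources follow the same client pattern with a request body of the Pod schema.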
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/pod-v1
Chapter 6. Configuration from sample environment file
Chapter 6. Configuration from sample environment file The environment file that you created in Creating the custom back end environment file configures the Block Storage service to use two NetApp back ends. The following snippet displays the relevant settings:
[ "enabled_backends = netapp1,netapp2 [netapp1] volume_backend_name=netapp_1 volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver netapp_login=root netapp_storage_protocol=iscsi netapp_password=p@USDUSDw0rd netapp_storage_family=ontap_7mode netapp_server_port=80 netapp_server_hostname=10.35.64.11 [netapp2] volume_backend_name=netapp_2 volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver netapp_login=root netapp_storage_protocol=iscsi netapp_password=p@USDUSDw0rd netapp_storage_family=ontap_7mode netapp_server_port=80 netapp_server_hostname=10.35.64.11" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/ref_configuration-sample-environment-file_custom-cinder-back-end
Chapter 9. The OptaPlanner SolverManager
Chapter 9. The OptaPlanner SolverManager A SolverManager is a facade for one or more Solver instances to simplify solving planning problems in REST and other enterprise services. Unlike the Solver.solve(... ) method, a SolverManager has the following characteristics: SolverManager.solve(... ) returns immediately: it schedules a problem for asynchronous solving without blocking the calling thread. This avoids timeout issues of HTTP and other technologies. SolverManager.solve(... ) solves multiple planning problems of the same domain, in parallel. Internally, a SolverManager manages a thread pool of solver threads, which call Solver.solve(... ) , and a thread pool of consumer threads, which handle best solution changed events. In Quarkus and Spring Boot, the SolverManager instance is automatically injected into your code. If you are using a platform other than Quarkus or Spring Boot, build a SolverManager instance with the create(... ) method: SolverConfig solverConfig = SolverConfig.createFromXmlResource(".../cloudBalancingSolverConfig.xml"); SolverManager<CloudBalance, UUID> solverManager = SolverManager.create(solverConfig, new SolverManagerConfig()); Each problem submitted to the SolverManager.solve(... ) methods must have a unique problem ID. Later calls to getSolverStatus(problemId) or terminateEarly(problemId) use that problem ID to distinguish between planning problems. The problem ID must be an immutable class, such as Long , String , or java.util.UUID . The SolverManagerConfig class has a parallelSolverCount property that controls how many solvers are run in parallel. For example, if the parallelSolverCount property is set to 4 and you submit five problems, four problems start solving immediately and the fifth problem starts when one of the first four ends. If those problems solve for five minutes each, the fifth problem takes 10 minutes to finish. By default, parallelSolverCount is set to AUTO , which resolves to half the CPU cores, regardless of the moveThreadCount of the solvers. To retrieve the best solution after solving terminates normally, use SolverJob.getFinalBestSolution() : CloudBalance problem1 = ...; UUID problemId = UUID.randomUUID(); // Returns immediately SolverJob<CloudBalance, UUID> solverJob = solverManager.solve(problemId, problem1); ... CloudBalance solution1; try { // Returns only after solving terminates solution1 = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw ...; } However, there are better approaches, both for solving batch problems before a user needs the solution and for live solving while a user is actively waiting for the solution. The current SolverManager implementation runs on a single computer node, but future work aims to distribute solver loads across a cloud. 9.1. Batch solving problems Batch solving is solving multiple data sets in parallel. Batch solving is particularly useful overnight: There are typically few or no problem changes in the middle of the night. Some organizations enforce a deadline, for example, submit all day-off requests before midnight . The solvers can run for much longer, often hours, because nobody is waiting for the results and CPU resources are often cheaper. Solutions are available when employees arrive at work the next working day. Procedure To batch solve problems in parallel, limited by parallelSolverCount , call solve(... ) for each data set, as shown in the following class:
public class TimeTableService { private SolverManager<TimeTable, Long> solverManager; // Returns immediately, call it for every data set public void solveBatch(Long timeTableId) { solverManager.solve(timeTableId, // Called once, when solving starts this::findById, // Called once, when solving ends this::save); } public TimeTable findById(Long timeTableId) {...} public void save(TimeTable timeTable) {...} } 9.2. Solve and listen to show progress When a solver is running while a user is waiting for a solution, the user might need to wait for several minutes or hours before receiving a result. To assure the user that everything is going well, show progress by displaying the best solution and best score attained so far. Procedure To handle intermediate best solutions, use solveAndListen(... ) : public class TimeTableService { private SolverManager<TimeTable, Long> solverManager; // Returns immediately public void solveLive(Long timeTableId) { solverManager.solveAndListen(timeTableId, // Called once, when solving starts this::findById, // Called multiple times, for every best solution change this::save); } public TimeTable findById(Long timeTableId) {...} public void save(TimeTable timeTable) {...} public void stopSolving(Long timeTableId) { solverManager.terminateEarly(timeTableId); } } This implementation uses the database to communicate with the UI, which polls the database. More advanced implementations push the best solutions directly to the UI or a messaging queue. When the user is satisfied with the intermediate best solution and does not want to wait any longer for a better one, call SolverManager.terminateEarly(problemId) .
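The text above mentions getSolverStatus(problemId) but does not show it in context. The following sketch is illustrative only: the TimeTableStatusService class and the polling use case are invented for this example, while SolverManager.getSolverStatus(... ) and the SolverStatus enum are part of the OptaPlanner API. A UI that polls the saved best solution can use it to report whether the solver is still running:

import org.optaplanner.core.api.solver.SolverManager;
import org.optaplanner.core.api.solver.SolverStatus;

public class TimeTableStatusService {

    private SolverManager<TimeTable, Long> solverManager;

    // Called by the UI while it polls for progress.
    // NOT_SOLVING means the problem finished, was terminated early, or was never submitted.
    public boolean isSolving(Long timeTableId) {
        SolverStatus solverStatus = solverManager.getSolverStatus(timeTableId);
        return solverStatus != SolverStatus.NOT_SOLVING;
    }
}

When isSolving(... ) returns false, the last solution written by the save(... ) callback is the final one for that problem ID.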
[ "SolverConfig solverConfig = SolverConfig.createFromXmlResource(\".../cloudBalancingSolverConfig.xml\"); SolverManager<CloudBalance, UUID> solverManager = SolverManager.create(solverConfig, new SolverManagerConfig());", "CloudBalance problem1 = ...; UUID problemId = UUID.randomUUID(); // Returns immediately SolverJob<CloudBalance, UUID> solverJob = solverManager.solve(problemId, problem1); CloudBalance solution1; try { // Returns only after solving terminates solution1 = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw ...; }", "public class TimeTableService { private SolverManager<TimeTable, Long> solverManager; // Returns immediately, call it for every data set public void solveBatch(Long timeTableId) { solverManager.solve(timeTableId, // Called once, when solving starts this::findById, // Called once, when solving ends this::save); } public TimeTable findById(Long timeTableId) {...} public void save(TimeTable timeTable) {...} }", "public class TimeTableService { private SolverManager<TimeTable, Long> solverManager; // Returns immediately public void solveLive(Long timeTableId) { solverManager.solveAndListen(timeTableId, // Called once, when solving starts this::findById, // Called multiple times, for every best solution change this::save); } public TimeTable findById(Long timeTableId) {...} public void save(TimeTable timeTable) {...} public void stopSolving(Long timeTableId) { solverManager.terminateEarly(timeTableId); } }" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/sovlermanager-con_developing-solvers
Chapter 15. Infrastructure [config.openshift.io/v1]
Chapter 15. Infrastructure [config.openshift.io/v1] Description Infrastructure holds cluster-wide information about Infrastructure. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 15.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description cloudConfig object cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. platformSpec object platformSpec holds desired information specific to the underlying infrastructure provider. 15.1.2. .spec.cloudConfig Description cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. Type object Property Type Description key string Key allows pointing to a specific key/value inside of the configmap. This is useful for logical file references. name string 15.1.3. .spec.platformSpec Description platformSpec holds desired information specific to the underlying infrastructure provider. 
Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "KubeVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.4. .spec.platformSpec.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object 15.1.5. .spec.platformSpec.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.6. .spec.platformSpec.aws.serviceEndpoints Description serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.7. .spec.platformSpec.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.8. 
.spec.platformSpec.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object 15.1.9. .spec.platformSpec.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object Property Type Description apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.apiServerInternalIPs will be used. Once set, the list cannot be completely removed (but its second entry can). ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.ingressIPs will be used. Once set, the list cannot be completely removed (but its second entry can). machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. Each network is provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". 15.1.10. .spec.platformSpec.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object 15.1.11. .spec.platformSpec.external Description ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. Type object Property Type Description platformName string PlatformName holds the arbitrary string representing the infrastructure provider name, expected to be set at the installation time. This field is solely for informational and reporting purposes and is not expected to be used for decision-making. 15.1.12. .spec.platformSpec.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. Type object 15.1.13. .spec.platformSpec.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object 15.1.14. .spec.platformSpec.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object 15.1.15. .spec.platformSpec.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Required prismCentral prismElements Property Type Description failureDomains array failureDomains configures failure domains information for the Nutanix platform. When set, the failure domains defined here may be used to spread Machines across prism element clusters to improve fault tolerance of the cluster. failureDomains[] object NutanixFailureDomain configures failure domain information for the Nutanix platform. prismCentral object prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. 
prismElements array prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) spread over multiple Prism Elements (clusters) of the Prism Central. prismElements[] object NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) 15.1.16. .spec.platformSpec.nutanix.failureDomains Description failureDomains configures failure domains information for the Nutanix platform. When set, the failure domains defined here may be used to spread Machines across prism element clusters to improve fault tolerance of the cluster. Type array 15.1.17. .spec.platformSpec.nutanix.failureDomains[] Description NutanixFailureDomain configures failure domain information for the Nutanix platform. Type object Required cluster name subnets Property Type Description cluster object cluster is to identify the cluster (the Prism Element under management of the Prism Central), in which the Machine's VM will be created. The cluster identifier (uuid or name) can be obtained from the Prism Central console or using the prism_central API. name string name defines the unique name of a failure domain. Name is required and must be at most 64 characters in length. It must consist of only lower case alphanumeric characters and hyphens (-). It must start and end with an alphanumeric character. This value is arbitrary and is used to identify the failure domain within the platform. subnets array subnets holds a list of identifiers (one or more) of the cluster's network subnets for the Machine's VM to connect to. If the feature gate NutanixMultiSubnets is enabled, up to 32 subnets may be configured. The subnet identifiers (uuid or name) can be obtained from the Prism Central console or using the prism_central API. subnets[] object NutanixResourceIdentifier holds the identity of a Nutanix PC resource (cluster, image, subnet, etc.) 15.1.18. .spec.platformSpec.nutanix.failureDomains[].cluster Description cluster is to identify the cluster (the Prism Element under management of the Prism Central), in which the Machine's VM will be created. The cluster identifier (uuid or name) can be obtained from the Prism Central console or using the prism_central API. Type object Required type Property Type Description name string name is the resource name in the PC. It cannot be empty if the type is Name. type string type is the identifier type to use for this resource. uuid string uuid is the UUID of the resource in the PC. It cannot be empty if the type is UUID. 15.1.19. .spec.platformSpec.nutanix.failureDomains[].subnets Description subnets holds a list of identifiers (one or more) of the cluster's network subnets for the Machine's VM to connect to. If the feature gate NutanixMultiSubnets is enabled, up to 32 subnets may be configured. The subnet identifiers (uuid or name) can be obtained from the Prism Central console or using the prism_central API. Type array 15.1.20. .spec.platformSpec.nutanix.failureDomains[].subnets[] Description NutanixResourceIdentifier holds the identity of a Nutanix PC resource (cluster, image, subnet, etc.) Type object Required type Property Type Description name string name is the resource name in the PC. It cannot be empty if the type is Name.
type string type is the identifier type to use for this resource. uuid string uuid is the UUID of the resource in the PC. It cannot be empty if the type is UUID. 15.1.21. .spec.platformSpec.nutanix.prismCentral Description prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.22. .spec.platformSpec.nutanix.prismElements Description prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) spread over multiple Prism Elements (clusters) of the Prism Central. Type array 15.1.23. .spec.platformSpec.nutanix.prismElements[] Description NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) Type object Required endpoint name Property Type Description endpoint object endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. name string name is the name of the Prism Element (cluster). This value will correspond with the cluster field configured on other resources (eg Machines, PVCs, etc). 15.1.24. .spec.platformSpec.nutanix.prismElements[].endpoint Description endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.25. .spec.platformSpec.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object Property Type Description apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.apiServerInternalIPs will be used. Once set, the list cannot be completely removed (but its second entry can). 
ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.ingressIPs will be used. Once set, the list cannot be completely removed (but its second entry can). machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. Each network is provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". 15.1.26. .spec.platformSpec.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object 15.1.27. .spec.platformSpec.powervs Description PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. 15.1.28. .spec.platformSpec.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.29. .spec.platformSpec.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.30. .spec.platformSpec.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.apiServerInternalIPs will be used. Once set, the list cannot be completely removed (but its second entry can). failureDomains array failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. failureDomains[] object VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. 
In dual stack clusters this list contains two IP addresses, one from IPv4 family and one from IPv6. In single stack clusters a single IP address is expected. When omitted, values from the status.ingressIPs will be used. Once set, the list cannot be completely removed (but its second entry can). machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. Each network is provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". nodeNetworking object nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. vcenters array vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported, but in tech preview 3 vCenters are supported. Once the cluster has been installed, you are unable to change the current number of defined vCenters except in the case where the cluster has been upgraded from a version of OpenShift where the vsphere platform spec was not present. You may make modifications to the existing vCenters that are defined in the vcenters list in order to match with any added or modified failure domains. vcenters[] object VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. 15.1.31. .spec.platformSpec.vsphere.failureDomains Description failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. Type array 15.1.32. .spec.platformSpec.vsphere.failureDomains[] Description VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. Type object Required name region server topology zone Property Type Description name string name defines the arbitrary but unique name of a failure domain. region string region defines the name of a region tag that will be attached to a vCenter datacenter. The tag category in vCenter must be named openshift-region. server string server is the fully-qualified domain name or the IP address of the vCenter server. topology object Topology describes a given failure domain using vSphere constructs zone string zone defines the name of a zone tag that will be attached to a vCenter cluster. The tag category in vCenter must be named openshift-zone. 15.1.33. .spec.platformSpec.vsphere.failureDomains[].topology Description Topology describes a given failure domain using vSphere constructs Type object Required computeCluster datacenter datastore networks Property Type Description computeCluster string computeCluster the absolute path of the vCenter cluster in which virtual machine will be located. The absolute path is of the form /<datacenter>/host/<cluster>. The maximum length of the path is 2048 characters. datacenter string datacenter is the name of vCenter datacenter in which virtual machines will be located. The maximum length of the datacenter name is 80 characters. datastore string datastore is the absolute path of the datastore in which the virtual machine is located. The absolute path is of the form /<datacenter>/datastore/<datastore> The maximum length of the path is 2048 characters. folder string folder is the absolute path of the folder where virtual machines are located. 
The absolute path is of the form /<datacenter>/vm/<folder>. The maximum length of the path is 2048 characters. networks array (string) networks is the list of port group network names within this failure domain. If feature gate VSphereMultiNetworks is enabled, up to 10 network adapters may be defined. 10 is the maximum number of virtual network devices which may be attached to a VM as defined by: https://configmax.esp.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%208.0&categories=1-0 The available networks (port groups) can be listed using govc ls 'network/*' Networks should be in the form of an absolute path: /<datacenter>/network/<portgroup>. resourcePool string resourcePool is the absolute path of the resource pool where virtual machines will be created. The absolute path is of the form /<datacenter>/host/<cluster>/Resources/<resourcepool>. The maximum length of the path is 2048 characters. template string template is the full inventory path of the virtual machine or template that will be cloned when creating new machines in this failure domain. The maximum length of the path is 2048 characters. When omitted, the template will be calculated by the control plane machineset operator based on the region and zone defined in VSpherePlatformFailureDomainSpec. For example, for zone=zonea, region=region1, and infrastructure name=test, the template path would be calculated as /<datacenter>/vm/test-rhcos-region1-zonea. 15.1.34. .spec.platformSpec.vsphere.nodeNetworking Description nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. Type object Property Type Description external object external represents the network configuration of the node that is externally routable. internal object internal represents the network configuration of the node that is routable only within the cluster. 15.1.35. .spec.platformSpec.vsphere.nodeNetworking.external Description external represents the network configuration of the node that is externally routable. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. 15.1.36. .spec.platformSpec.vsphere.nodeNetworking.internal Description internal represents the network configuration of the node that is routable only within the cluster. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. 
network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. 15.1.37. .spec.platformSpec.vsphere.vcenters Description vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported, but in tech preview 3 vCenters are supported. Once the cluster has been installed, you are unable to change the current number of defined vCenters except in the case where the cluster has been upgraded from a version of OpenShift where the vsphere platform spec was not present. You may make modifications to the existing vCenters that are defined in the vcenters list in order to match with any added or modified failure domains. Type array 15.1.38. .spec.platformSpec.vsphere.vcenters[] Description VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. Type object Required datacenters server Property Type Description datacenters array (string) The vCenter Datacenters in which the RHCOS vm guests are located. This field will be used by the Cloud Controller Manager. Each datacenter listed here should be used within a topology. port integer port is the TCP port that will be used to communicate to the vCenter endpoint. When omitted, this means the user has no opinion and it is up to the platform to choose a sensible default, which is subject to change over time. server string server is the fully-qualified domain name or the IP address of the vCenter server. 15.1.39. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description apiServerInternalURI string apiServerInternalURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerInternalURL can be used by components like kubelets, to contact the Kubernetes API server using the infrastructure provider rather than Kubernetes networking. apiServerURL string apiServerURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerURL can be used by components like the web console to tell users where to find the Kubernetes API. controlPlaneTopology string controlPlaneTopology expresses the expectations for operands that normally run on control nodes. The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation The 'External' mode indicates that the control plane is hosted externally to the cluster and that its components are not visible within the cluster. cpuPartitioning string cpuPartitioning expresses if CPU partitioning is a currently enabled feature in the cluster. CPU Partitioning means that this cluster can support partitioning workloads to specific CPU Sets. Valid values are "None" and "AllNodes". When omitted, the default value is "None". The default value of "None" indicates that no nodes will be setup with CPU partitioning. 
The "AllNodes" value indicates that all nodes have been setup with CPU partitioning, and can then be further configured via the PerformanceProfile API. etcdDiscoveryDomain string etcdDiscoveryDomain is the domain used to fetch the SRV records for discovering etcd servers and clients. For more info: https://github.com/etcd-io/etcd/blob/329be66e8b3f9e2e6af83c123ff89297e49ebd15/Documentation/op-guide/clustering.md#dns-discovery deprecated: as of 4.7, this field is no longer set or honored. It will be removed in a future release. infrastructureName string infrastructureName uniquely identifies a cluster with a human friendly name. Once set it should not be changed. Must be of max length 27 and must have only alphanumeric or hyphen characters. infrastructureTopology string infrastructureTopology expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master . The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation NOTE: External topology mode is not applicable for this field. platform string platform is the underlying infrastructure provider for the cluster. Deprecated: Use platformStatus.type instead. platformStatus object platformStatus holds status information specific to the underlying infrastructure provider. 15.1.40. .status.platformStatus Description platformStatus holds status information specific to the underlying infrastructure provider. Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object External contains settings specific to the generic External infrastructure provider. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. 
This value will be synced with to the status.platform and status.platformStatus.type . Currently this value cannot be changed once set. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.41. .status.platformStatus.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object Required region Property Type Description region string region specifies the region for Alibaba Cloud resources created for the cluster. resourceGroupID string resourceGroupID is the ID of the resource group for the cluster. resourceTags array resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. resourceTags[] object AlibabaCloudResourceTag is the set of tags to add to apply to resources. 15.1.42. .status.platformStatus.alibabaCloud.resourceTags Description resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. Type array 15.1.43. .status.platformStatus.alibabaCloud.resourceTags[] Description AlibabaCloudResourceTag is the set of tags to add to apply to resources. Type object Required key value Property Type Description key string key is the key of the tag. value string value is the value of the tag. 15.1.44. .status.platformStatus.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description region string region holds the default AWS region for new AWS resources created by the cluster. resourceTags array resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. resourceTags[] object AWSResourceTag is a tag to apply to AWS resources created for the cluster. serviceEndpoints array ServiceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.45. .status.platformStatus.aws.resourceTags Description resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. Type array 15.1.46. .status.platformStatus.aws.resourceTags[] Description AWSResourceTag is a tag to apply to AWS resources created for the cluster. Type object Required key value Property Type Description key string key is the key of the tag value string value is the value of the tag. Some AWS service do not support empty values. Since tags are added to resources in many services, the length of the tag value must meet the requirements of all services. 15.1.47. .status.platformStatus.aws.serviceEndpoints Description ServiceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.48. 
.status.platformStatus.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.49. .status.platformStatus.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object Property Type Description armEndpoint string armEndpoint specifies a URL to use for resource management in non-soverign clouds such as Azure Stack. cloudName string cloudName is the name of the Azure cloud environment which can be used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the value is equal to AzurePublicCloud . networkResourceGroupName string networkResourceGroupName is the Resource Group for network resources like the Virtual Network and Subnets used by the cluster. If empty, the value is same as ResourceGroupName. resourceGroupName string resourceGroupName is the Resource Group for new Azure resources created for the cluster. resourceTags array resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. resourceTags[] object AzureResourceTag is a tag to apply to Azure resources created for the cluster. 15.1.50. .status.platformStatus.azure.resourceTags Description resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. Type array 15.1.51. .status.platformStatus.azure.resourceTags[] Description AzureResourceTag is a tag to apply to Azure resources created for the cluster. Type object Required key value Property Type Description key string key is the key part of the tag. A tag key can have a maximum of 128 characters and cannot be empty. Key must begin with a letter, end with a letter, number or underscore, and must contain only alphanumeric characters and the following special characters _ . - . value string value is the value part of the tag. A tag value can have a maximum of 256 characters and cannot be empty. Value must contain only alphanumeric characters and the following special characters _ + , - . / : ; < = > ? @ . 15.1.52. .status.platformStatus.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. 
It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for BareMetal deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.53. .status.platformStatus.baremetal.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on BareMetal platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.54. .status.platformStatus.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.55. .status.platformStatus.external Description External contains settings specific to the generic External infrastructure provider. Type object Property Type Description cloudControllerManager object cloudControllerManager contains settings specific to the external Cloud Controller Manager (a.k.a. CCM or CPI). 
When omitted, new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. 15.1.56. .status.platformStatus.external.cloudControllerManager Description cloudControllerManager contains settings specific to the external Cloud Controller Manager (a.k.a. CCM or CPI). When omitted, new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. Type object Property Type Description state string state determines whether or not an external Cloud Controller Manager is expected to be installed within the cluster. https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager Valid values are "External", "None" and omitted. When set to "External", new nodes will be tainted as uninitialized when created, preventing them from running workloads until they are initialized by the cloud controller manager. When omitted or set to "None", new nodes will not be tainted and no extra initialization from the cloud controller manager is expected. 15.1.57. .status.platformStatus.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. Type object Property Type Description projectID string projectID is the Project ID for new GCP resources created for the cluster. region string region holds the region for new GCP resources created for the cluster. resourceLabels array resourceLabels is a list of additional labels to apply to GCP resources created for the cluster. See https://cloud.google.com/compute/docs/labeling-resources for information on labeling GCP resources. GCP supports a maximum of 64 labels per resource. OpenShift reserves 32 labels for internal use, allowing 32 labels for user configuration. resourceLabels[] object GCPResourceLabel is a label to apply to GCP resources created for the cluster. resourceTags array resourceTags is a list of additional tags to apply to GCP resources created for the cluster. See https://cloud.google.com/resource-manager/docs/tags/tags-overview for information on tagging GCP resources. GCP supports a maximum of 50 tags per resource. resourceTags[] object GCPResourceTag is a tag to apply to GCP resources created for the cluster. 15.1.58. .status.platformStatus.gcp.resourceLabels Description resourceLabels is a list of additional labels to apply to GCP resources created for the cluster. See https://cloud.google.com/compute/docs/labeling-resources for information on labeling GCP resources. GCP supports a maximum of 64 labels per resource. OpenShift reserves 32 labels for internal use, allowing 32 labels for user configuration. Type array 15.1.59. .status.platformStatus.gcp.resourceLabels[] Description GCPResourceLabel is a label to apply to GCP resources created for the cluster. Type object Required key value Property Type Description key string key is the key part of the label. A label key can have a maximum of 63 characters and cannot be empty. Label key must begin with a lowercase letter, and must contain only lowercase letters, numeric characters, and the following special characters _- . Label key must not have the reserved prefixes kubernetes-io and openshift-io . value string value is the value part of the label. A label value can have a maximum of 63 characters and cannot be empty. Value must contain only lowercase letters, numeric characters, and the following special characters _- . 15.1.60.
.status.platformStatus.gcp.resourceTags Description resourceTags is a list of additional tags to apply to GCP resources created for the cluster. See https://cloud.google.com/resource-manager/docs/tags/tags-overview for information on tagging GCP resources. GCP supports a maximum of 50 tags per resource. Type array 15.1.61. .status.platformStatus.gcp.resourceTags[] Description GCPResourceTag is a tag to apply to GCP resources created for the cluster. Type object Required key parentID value Property Type Description key string key is the key part of the tag. A tag key can have a maximum of 63 characters and cannot be empty. Tag key must begin and end with an alphanumeric character, and must contain only uppercase, lowercase alphanumeric characters, and the following special characters ._- . parentID string parentID is the ID of the hierarchical resource where the tags are defined, e.g. at the Organization or the Project level. To find the Organization or Project ID refer to the following pages: https://cloud.google.com/resource-manager/docs/creating-managing-organization#retrieving_your_organization_id , https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects . An OrganizationID must consist of decimal numbers, and cannot have leading zeroes. A ProjectID must be 6 to 30 characters in length, can only contain lowercase letters, numbers, and hyphens, and must start with a letter, and cannot end with a hyphen. value string value is the value part of the tag. A tag value can have a maximum of 63 characters and cannot be empty. Tag value must begin and end with an alphanumeric character, and must contain only uppercase, lowercase alphanumeric characters, and the following special characters _-.@%=+:,*#&(){}[] and spaces. 15.1.62. .status.platformStatus.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain location string Location is where the cluster has been deployed providerType string ProviderType indicates the type of cluster that was created resourceGroupName string ResourceGroupName is the Resource Group for new IBMCloud resources created for the cluster. serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of an IBM Cloud service. These endpoints are consumed by components within the cluster to reach the respective IBM Cloud Services. serviceEndpoints[] object IBMCloudServiceEndpoint stores the configuration of a custom url to override existing defaults of IBM Cloud Services. 15.1.63. .status.platformStatus.ibmcloud.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of an IBM Cloud service. These endpoints are consumed by components within the cluster to reach the respective IBM Cloud Services. Type array 15.1.64. .status.platformStatus.ibmcloud.serviceEndpoints[] Description IBMCloudServiceEndpoint stores the configuration of a custom url to override existing defaults of IBM Cloud Services. Type object Required name url Property Type Description name string name is the name of the IBM Cloud service. 
Possible values are: CIS, COS, COSConfig, DNSServices, GlobalCatalog, GlobalSearch, GlobalTagging, HyperProtect, IAM, KeyProtect, ResourceController, ResourceManager, or VPC. For example, the IBM Cloud Private IAM service could be configured with the service name of IAM and url of https://private.iam.cloud.ibm.com Whereas the IBM Cloud Private VPC service for US South (Dallas) could be configured with the service name of VPC and url of https://us.south.private.iaas.cloud.ibm.com url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.65. .status.platformStatus.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.66. .status.platformStatus.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. 15.1.67. .status.platformStatus.nutanix.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on Nutanix platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. 
When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.68. .status.platformStatus.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. cloudName string cloudName is the name of the desired OpenStack cloud in the client configuration file ( clouds.yaml ). ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for OpenStack deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.69. .status.platformStatus.openstack.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on OpenStack platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.70. .status.platformStatus.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. 
Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. nodeDNSIP string deprecated: as of 4.6, this field is no longer set or honored. It will be removed in a future release. 15.1.71. .status.platformStatus.ovirt.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. Type object Property Type Description type string type defines the type of load balancer used by the cluster on Ovirt platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.1.72. .status.platformStatus.powervs Description PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain region string region holds the default Power VS region for new Power VS resources created by the cluster. resourceGroup string resourceGroup is the resource group name for new IBMCloud resources created for a cluster. The resource group specified here will be used by cluster-image-registry-operator to set up a COS Instance in IBMCloud for the cluster registry. More about resource groups can be found here: https://cloud.ibm.com/docs/account?topic=account-rgs . When omitted, the image registry operator won't be able to configure storage, which results in the image registry cluster operator not being in an available state. 
serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. zone string zone holds the default zone for the new Power VS resources created by the cluster. Note: Currently only single-zone OCP clusters are supported 15.1.73. .status.platformStatus.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.74. .status.platformStatus.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.75. .status.platformStatus.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. loadBalancer object loadBalancer defines how the load balancer used by the cluster is configured. machineNetworks array (string) machineNetworks are IP networks used to connect all the OpenShift cluster nodes. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for vSphere deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.76. .status.platformStatus.vsphere.loadBalancer Description loadBalancer defines how the load balancer used by the cluster is configured. 
Type object Property Type Description type string type defines the type of load balancer used by the cluster on VSphere platform which can be a user-managed or openshift-managed load balancer that is to be used for the OpenShift API and Ingress endpoints. When set to OpenShiftManagedDefault the static pods in charge of API and Ingress traffic load-balancing defined in the machine config operator will be deployed. When set to UserManaged these static pods will not be deployed and it is expected that the load balancer is configured out of band by the deployer. When omitted, this means no opinion and the platform is left to choose a reasonable default. The default value is OpenShiftManagedDefault. 15.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/infrastructures DELETE : delete collection of Infrastructure GET : list objects of kind Infrastructure POST : create an Infrastructure /apis/config.openshift.io/v1/infrastructures/{name} DELETE : delete an Infrastructure GET : read the specified Infrastructure PATCH : partially update the specified Infrastructure PUT : replace the specified Infrastructure /apis/config.openshift.io/v1/infrastructures/{name}/status GET : read status of the specified Infrastructure PATCH : partially update status of the specified Infrastructure PUT : replace status of the specified Infrastructure 15.2.1. /apis/config.openshift.io/v1/infrastructures HTTP method DELETE Description delete collection of Infrastructure Table 15.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Infrastructure Table 15.2. HTTP responses HTTP code Response body 200 - OK InfrastructureList schema 401 - Unauthorized Empty HTTP method POST Description create an Infrastructure Table 15.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.4. Body parameters Parameter Type Description body Infrastructure schema Table 15.5. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 202 - Accepted Infrastructure schema 401 - Unauthorized Empty 15.2.2. /apis/config.openshift.io/v1/infrastructures/{name} Table 15.6.
Global path parameters Parameter Type Description name string name of the Infrastructure HTTP method DELETE Description delete an Infrastructure Table 15.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 15.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Infrastructure Table 15.9. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Infrastructure Table 15.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.11. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Infrastructure Table 15.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.13. Body parameters Parameter Type Description body Infrastructure schema Table 15.14. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty 15.2.3. /apis/config.openshift.io/v1/infrastructures/{name}/status Table 15.15. Global path parameters Parameter Type Description name string name of the Infrastructure HTTP method GET Description read status of the specified Infrastructure Table 15.16. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Infrastructure Table 15.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.18. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Infrastructure Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.20.
Body parameters Parameter Type Description body Infrastructure schema Table 15.21. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty
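For orientation, the following is a minimal, illustrative sketch of what a populated status.platformStatus stanza might look like for a cluster on GCP, tying together the projectID, resourceLabels, and resourceTags fields described above. All names, IDs, and values are placeholder assumptions and are not taken from a real cluster; the status block is normally written by the cluster itself and read back through the status endpoints listed in this section rather than authored by hand.
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster                    # assumed name of the cluster-scoped singleton
status:
  platformStatus:
    type: GCP
    gcp:
      projectID: example-project   # placeholder Project ID
      region: us-central1          # placeholder region
      resourceLabels:              # see .status.platformStatus.gcp.resourceLabels
      - key: team
        value: platform
      resourceTags:                # see .status.platformStatus.gcp.resourceTags
      - parentID: "1234567890"     # placeholder Organization or Project ID
        key: environment
        value: production
A fragment of this shape is what a GET against /apis/config.openshift.io/v1/infrastructures/{name}/status would return for such a cluster.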
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/infrastructure-config-openshift-io-v1
Service Mesh
Service Mesh OpenShift Container Platform 4.7 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: true PILOT_ENABLE_GATEWAY_API_STATUS: true # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: techPreview: global: pathNormalization: <option>", "oc create -f <myEnvoyFilterFile>", "apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end", "api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^ibm.*\" - \"^kiali-operator\"", "spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020", "{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: matchLabels: app: httpbin", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.2 tracing: type: Jaeger sampling: 10000 addons: jaeger: name: jaeger install: storage: type: Memory kiali: enabled: true name: kiali grafana: enabled: true", "oc create -n istio-system -f <istio_installation.yaml>", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m wasm-cacher-basic-8c986c75-vj2cd 1/1 Running 0 65m", "oc login 
https://<HOSTNAME>:6443", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.1 66m", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.2 security: identity: type: ThirdParty #required setting for ROSA tracing: type: Jaeger sampling: 10000 policy: type: Istiod addons: grafana: enabled: true jaeger: install: storage: type: Memory kiali: enabled: true prometheus: enabled: true telemetry: type: Istiod", "apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali namespace: istio-system spec: auth: strategy: openshift deployment: accessible_namespaces: #restricted setting for ROSA - istio-system image_pull_policy: '' ingress_enabled: true namespace: istio-system", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 
2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: annotations: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.2", "oc project istio-system", "oc get smcp -o yaml", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2", "oc get smcp -o yaml", "oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. 
oc replace -f smcp-resource.yaml", "oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'", "oc edit smcp.v1.maistra.io <smcp_name>", "oc project istio-system", "oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml", "oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml", "oc new-project istio-system-upgrade", "oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml", "spec: policy: type: Mixer", "spec: telemetry: type: Mixer", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage", "apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN", "#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. 
# principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check", "spec: tracing: sampling: 100 # 1% type: Jaeger", "spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"", "spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install", "oc rollout restart <deployment>", "oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>", "apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic", "oc policy add-role-to-user", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.2 security: dataPlane: mtls: true", "apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT", "oc create -n <namespace> -f <policy.yaml>", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "oc create -n <namespace> -f <destination-rule.yaml>", "kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [\"1.2.3.4\"]", "oc create -n istio-system -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]", "apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: 
\"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"", "apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts", "oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'", "oc -n bookinfo delete pods --all", "pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted", "oc get pods -n bookinfo", "sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service 
istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n istio-system get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: maistra.io/v1alpha1 kind: metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false", "apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"", "oc apply -f sidecar.yaml", "oc get sidecar", "oc apply -f 
https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc get routes", "NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect", "curl \"http://USDGATEWAY_URL/productpage\"", "spec: addons: jaeger: name: distr-tracing-production", "spec: tracing: sampling: 100", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.2 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "oc get smcp basic -o yaml", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.2 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local", "spec: cluster: name:", "spec: cluster: network:", "spec: gateways: additionalEgress: <egressName>:", "spec: gateways: additionalEgress: <egressName>: enabled:", "spec: gateways: additionalEgress: <egressName>: requestedNetworkView:", "spec: gateways: additionalEgress: <egressName>: routerMode:", "spec: 
gateways: additionalEgress: <egressName>: service: metadata: labels: federation.maistra.io/egress-for:", "spec: gateways: additionalEgress: <egressName>: service: ports:", "spec: gateways: additionalIngress:", "spec: gateways: additionalIgress: <ingressName>: enabled:", "spec: gateways: additionalIngress: <ingressName>: routerMode:", "spec: gateways: additionalIngress: <ingressName>: service: type:", "spec: gateways: additionalIngress: <ingressName>: service: type:", "spec: gateways: additionalIngress: <ingressName>: service: metadata: labels: federation.maistra.io/ingress-for:", "spec: gateways: additionalIngress: <ingressName>: service: ports:", "spec: gateways: additionalIngress: <ingressName>: service: ports: nodePort:", "gateways: additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery", "kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local", "spec: security: trust: domain:", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project red-mesh-system", "oc edit -n red-mesh-system smcp red-mesh", "oc get smcp -n red-mesh-system", "NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "metadata: name:", "metadata: namespace:", "spec: remote: addresses:", "spec: remote: discoveryPort:", "spec: remote: servicePort:", "spec: gateways: ingress: name:", "spec: gateways: egress: name:", "spec: security: trustDomain:", "spec: security: clientID:", "spec: security: certificateChain: kind: ConfigMap name:", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "oc create -n red-mesh-system -f servicemeshpeer.yaml", "oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml", "status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo 
name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo", "metadata: name:", "metadata: namespace:", "spec: exportRules: - type:", "spec: exportRules: - type: NameSelector nameSelector: namespace: name:", "spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews", "oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>", "oc create -n red-mesh-system -f export-to-green-mesh.yaml", "oc get exportedserviceset <PeerMeshExportedTo> -o yaml", "oc get exportedserviceset green-mesh -o yaml", "oc get exportedserviceset <PeerMeshExportedTo> -o yaml", "oc -n red-mesh-system get exportedserviceset green-mesh -o yaml", "status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings", "metadata: name:", "metadata: namespace:", "spec: importRules: - type:", "spec: importRules: - type: NameSelector nameSelector: namespace: name:", "spec: importRules: - type: NameSelector importAsLocal:", "spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings", "kind: 
ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project green-mesh-system", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings", "oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>", "oc create -n green-mesh-system -f import-from-red-mesh.yaml", "oc get importedserviceset <PeerMeshImportedInto> -o yaml", "oc get importedserviceset green-mesh -o yaml", "oc get importedserviceset <PeerMeshImportedInto> -o yaml", "oc -n green-mesh-system get importedserviceset/red-mesh -o yaml", "status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. 
locality: region: us-west", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>", "oc edit -n green-mesh-system -f import-from-red-mesh.yaml", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m", "oc create -n <application namespace> -f <DestinationRule.yaml>", "oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "oc apply -f plugin.yaml", "schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100", "oc apply -f <extension>.yaml", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth namespace: bookinfo 1 spec: workloadSelector: 2 labels: app: productpage config: <yaml_configuration> image: registry.redhat.io/openshift-service-mesh/3scale-auth-wasm-rhel8:0.0.1 phase: PostAuthZ priority: 100", "oc apply -f threescale-wasm-auth-bookinfo.yaml", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: threescale-saas-backend spec: hosts: - 
su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "oc apply -f <filename.yml>", "echo -n \"<filename.yml>\" | oc apply -f -", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth namespace: bookinfo spec: config: api: v1", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth spec: config: system: name: saas_porta upstream: <object> token: myaccount_token ttl: 300", "apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth spec: config: backend: name: backend upstream: <object>", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth spec: config: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth spec: config: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth spec: config: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-wasm-auth spec: config: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1", "credentials: user_key: - query_string: keys: - user_key - header: keys: - user_key", "credentials: app_id: - header: keys: - app_id - query_string: keys: - app_id app_key: - header: keys: - app_key - query_string: keys: - app_key", "aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l", "credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key", "credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1", "credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: 
- keys: - azp - aud - take: head: 1", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: threescale-auth spec: image: registry.redhat.io/openshift-service-mesh/3scale-auth-wasm-rhel8:0.0.1 phase: PostAuthZ priority: 100 workloadSelector: labels: app: productpage config: api: v1 system: name: system-name upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: atoken backend: name: backend-name upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' token: service_token authorities: - \"*\" credentials: app_id: - header: keys: - app_id - query_string: keys: - app_id - application_id app_key: - header: keys: - app_key - query_string: keys: - app_key - application_key user_key: - query_string: keys: - user_key - header: keys: - user_key mapping_rules: - method: GET pattern: \"/\" usages: - name: Hits delta: 1 - method: GET pattern: \"/o{*}c\" usages: - name: oidc delta: 1 - name: Hits delta: 1 - method: any pattern: \"/{anything}?bigsale={*}\" usages: - name: sale delta: 5", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: 
\"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n <istio-system>", "oc logs <istio-system>", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s", "oc logs -n openshift-operators <podName>", "oc logs -n openshift-operators istio-operator-bb49787db-zgr87", "oc get pods -n istio-system", "NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s wasm-cacher-basic-5b99bfcddb-m775l 1/1 Running 0 86s", "oc get smcp -n <istio-system>", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s", "NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h", "oc describe smcp <smcp-name> -n <controlplane-namespace>", "oc describe smcp basic -n istio-system", "oc get jaeger -n <istio-system>", "NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m", "oc get kiali -n <istio-system>", "NAME AGE kiali 15m", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc edit smcp <smcp_name>", "spec: proxy: accessLogging: file: name: /dev/stdout #file name", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather 
--image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8 gather <namespace>", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true", "logging:", "logging: componentLevels:", "logging: logLevels:", "logging: logAsJSON:", "validationMessages:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 100 type: Jaeger", "tracing: sampling:", "tracing: type:", "spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 
kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali", "spec: addons: kiali: name:", "kiali: enabled:", "kiali: install:", "kiali: install: dashboard:", "kiali: install: dashboard: viewOnly:", "kiali: install: dashboard: enableGrafana:", "kiali: install: dashboard: enablePrometheus:", "kiali: install: dashboard: enableTracing:", "kiali: install: service:", "kiali: install: service: metadata:", "kiali: install: service: metadata: annotations:", "kiali: install: service: metadata: labels:", "kiali: install: service: ingress:", "kiali: install: service: ingress: metadata: annotations:", "kiali: install: service: ingress: metadata: labels:", "kiali: install: service: ingress: enabled:", "kiali: install: service: ingress: contextPath:", "install: service: ingress: hosts:", "install: service: ingress: tls:", "kiali: install: service: nodePort:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 100 type: Jaeger", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.2 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc login https://<HOSTNAME>:6443", "oc project istio-system", "oc edit -n tracing-system -f jaeger.yaml", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc apply -n tracing-system -f <jaeger.yaml>", "oc get pods -n tracing-system -w", 
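A brief verification sketch for the external Jaeger steps above (assumptions: the Jaeger instance was created in the tracing-system project and is named jaeger; both the instance name and route name are illustrative). It only reuses lookups that appear elsewhere in this reference to confirm the custom resource is running and to read back its query route host:

    oc get jaeger -n tracing-system
    oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}'
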
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "collector: replicas:", "spec: collector: options: {}", "options: collector: num-workers:", "options: collector: queue-size:", "options: kafka: producer: topic: jaeger-spans", "options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092", "options: log-level:", "spec: sampling: options: {} default_strategy: service_strategy:", "default_strategy: type: service_strategy: type:", "default_strategy: param: service_strategy: param:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5", "spec: sampling: options: default_strategy: type: probabilistic param: 1", "spec: storage: type:", "storage: secretname:", "storage: options: {}", "storage: esIndexCleaner: enabled:", "storage: esIndexCleaner: numberOfDays:", "storage: esIndexCleaner: schedule:", "elasticsearch: properties: doNotProvision:", "elasticsearch: properties: name:", "elasticsearch: nodeCount:", "elasticsearch: resources: requests: cpu:", "elasticsearch: resources: requests: memory:", "elasticsearch: resources: limits: cpu:", "elasticsearch: resources: limits: memory:", "elasticsearch: redundancyPolicy:", "elasticsearch: useCertManagement:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy", "es: server-urls:", "es: max-doc-count:", "es: max-num-spans:", "es: max-span-age:", "es: sniffer:", "es: sniffer-tls-enabled:", "es: timeout:", "es: username:", "es: password:", "es: version:", "es: num-replicas:", "es: num-shards:", "es: create-index-templates:", "es: index-prefix:", "es: bulk: actions:", "es: bulk: flush-interval:", "es: bulk: size:", "es: bulk: workers:", "es: tls: ca:", "es: tls: cert:", "es: tls: enabled:", "es: tls: key:", "es: tls: server-name:", "es: token-file:", "es-archive: bulk: actions:", "es-archive: bulk: flush-interval:", "es-archive: bulk: size:", "es-archive: bulk: workers:", "es-archive: create-index-templates:", "es-archive: enabled:", "es-archive: index-prefix:", "es-archive: max-doc-count:", "es-archive: max-num-spans:", "es-archive: max-span-age:", "es-archive: num-replicas:", "es-archive: num-shards:", "es-archive: password:", "es-archive: server-urls:", "es-archive: sniffer:", "es-archive: sniffer-tls-enabled:", "es-archive: timeout:", "es-archive: tls: ca:", "es-archive: tls: cert:", "es-archive: tls: enabled:", "es-archive: tls: key:", "es-archive: tls: server-name:", "es-archive: 
token-file:", "es-archive: username:", "es-archive: version:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true", "spec: query: replicas:", "spec: query: options: {}", "options: log-level:", "options: query: base-path:", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger", "spec: ingester: options: {}", "options: deadlockInterval:", "options: kafka: consumer: topic:", "options: kafka: consumer: brokers:", "options: log-level:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete svc maistra-admission-controller -n openshift-operators", "oc -n openshift-operators delete ds -lmaistra-version", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", 
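A hedged follow-up check for the removal commands above (a sketch, not part of the documented uninstall procedure): after deleting the istio.io, maistra.io, kiali.io, and jaegertracing.io custom resource definitions, an empty result from the filter below suggests no Service Mesh CRDs remain before reinstalling:

    oc get crds -o name | grep -E 'istio\.io|maistra\.io|kiali\.io|jaegertracing\.io'
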
"oc delete crds jaegers.jaegertracing.io", "oc delete cm -n openshift-operators maistra-operator-cabundle", "oc delete cm -n openshift-operators istio-cni-config", "oc delete sa -n openshift-operators istio-cni", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8 gather <namespace>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: global: pathNormalization: <option>", "{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }", "oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap", "oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. 
version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings", "oc get jaeger -n istio-system", "NAME AGE jaeger 3d21h", "oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml", "oc delete jaeger jaeger -n istio-system", "oc create -f /tmp/jaeger-cr.yaml -n istio-system", "rm /tmp/jaeger-cr.yaml", "oc delete -f <jaeger-cr-file>", "oc delete -f jaeger-prod-elasticsearch.yaml", "oc create -f <jaeger-cr-file>", "oc get pods -n jaeger-system -w", "spec: version: v1.1", "{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "oc create -n istio-system -f istio-installation.yaml", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc patch 
deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true", "apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}", "apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false", "oc delete secret istio.default", "RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem", "/tmp/pod-cert-chain-workload.pem: OK", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o 
jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n <control_plane_namespace> get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f 
https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators", "oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'", "maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded", "oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0", "deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: annotations: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc get cm -n <istio-system> istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks", "oc edit cm -n <istio-system> istio", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.2/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created 
destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'", "curl \"http://USDGATEWAY_URL/productpage\"", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "echo USDJAEGER_URL", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one", "istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret", "gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1", "mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:", "spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true", "enabled", "dashboard viewOnlyMode", "ingress enabled", "spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one", "tracing: enabled:", "jaeger: template:", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", 
"elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "oc get route -n istio-system external-jaeger", "NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"", "spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", 
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n <istio-system>", "oc logs <istio-system>", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete -n openshift-operators daemonset/istio-node", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete 
clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete svc admission-controller -n <operator-project>", "oc delete project <istio-system-project>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/service_mesh/index
9.8. Encrypting the Database
9.8. Encrypting the Database Information is stored in a database in plain text. Consequently, some extremely sensitive information, such as government identification numbers or passwords, may not be sufficiently protected by access control measures. It may be possible to gain access to a server's persistent storage files, either directly through the file system or by accessing discarded disk drives or archive media. Database encryption allows individual attributes to be encrypted as they are stored in the database. When configured, every instance of a particular attribute, even index data, is encrypted and can only be accessed using a secure channel, such as TLS. For information on using database encryption, see the "Configuring Directory Databases" chapter in the Red Hat Directory Server Administration Guide .
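As an illustrative sketch only, and not the authoritative procedure from the Administration Guide, attribute encryption is typically enabled by adding an nsAttributeEncryption entry under the backend configuration; the backend name (userRoot), the attribute (telephoneNumber), and the cipher shown below are assumptions made for the example.
# Hypothetical example: enable AES encryption for the telephoneNumber attribute
# in the userRoot backend. Adjust the backend name and attribute to your deployment.
ldapmodify -D "cn=Directory Manager" -W -x << EOF
dn: cn=telephoneNumber,cn=encrypted attributes,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsAttributeEncryption
cn: telephoneNumber
nsEncryptionAlgorithm: AES
EOF
Data that is already stored in the database generally needs to be exported and re-imported for the new setting to take effect, so plan the change before loading data where possible.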
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_a_Secure_Directory-Database_Encryption
Chapter 2. About Serverless
Chapter 2. About Serverless 2.1. OpenShift Serverless overview OpenShift Serverless provides Kubernetes native building blocks that enable developers to create and deploy serverless, event-driven applications on OpenShift Container Platform. OpenShift Serverless is based on the open source Knative project , which provides portability and consistency for hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. Note Because OpenShift Serverless releases on a different cadence from OpenShift Container Platform, the OpenShift Serverless documentation does not maintain separate documentation sets for minor versions of the product. The current documentation set applies to all currently supported versions of OpenShift Serverless unless version-specific limitations are called out in a particular topic or for a particular feature. For additional information about the OpenShift Serverless life cycle and supported platforms, refer to the Platform Life Cycle Policy . 2.1.1. Additional resources Extending the Kubernetes API with custom resource definitions Managing resources from custom resource definitions What is serverless? 2.2. Knative Serving Knative Serving supports developers who want to create, deploy, and manage cloud-native applications . It provides a set of objects as Kubernetes custom resource definitions (CRDs) that define and control the behavior of serverless workloads on an OpenShift Container Platform cluster. Developers use these CRDs to create custom resource (CR) instances that can be used as building blocks to address complex use cases. For example: Rapidly deploying serverless containers. Automatically scaling pods. 2.2.1. Knative Serving resources Service The service.serving.knative.dev CRD automatically manages the life cycle of your workload to ensure that the application is deployed and reachable through the network. It creates a route, a configuration, and a new revision for each change to a user created service, or custom resource. Most developer interactions in Knative are carried out by modifying services. Revision The revision.serving.knative.dev CRD is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as necessary. Route The route.serving.knative.dev CRD maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes. Configuration The configuration.serving.knative.dev CRD maintains the desired state for your deployment. It provides a clean separation between code and configuration. Modifying a configuration creates a new revision. 2.3. Knative Eventing Knative Eventing on OpenShift Container Platform enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers and event consumers. Event producers create events, and event sinks , or consumers, receive events. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. These events conform to the CloudEvents specifications , which enables creating, parsing, sending, and receiving events in any programming language. 
Knative Eventing supports the following use cases: Publish an event without creating a consumer You can send events to a broker as an HTTP POST, and use binding to decouple the destination configuration from your application that produces events. Consume an event without creating a publisher You can use a trigger to consume events from a broker based on event attributes. The application receives events as an HTTP POST. To enable delivery to multiple types of sinks, Knative Eventing defines the following generic interfaces that can be implemented by multiple Kubernetes resources: Addressable resources Able to receive and acknowledge an event delivered over HTTP to an address defined in the status.address.url field of the event. The Kubernetes Service resource also satisfies the addressable interface. Callable resources Able to receive an event delivered over HTTP and transform it, returning 0 or 1 new events in the HTTP response payload. These returned events may be further processed in the same way that events from an external event source are processed. 2.3.1. Using Knative Kafka Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities. Note Knative Kafka is not currently supported for IBM Z and IBM Power. Knative Kafka provides additional options, such as: Kafka source Kafka channel Kafka broker Kafka sink 2.3.2. Additional resources Installing the KnativeKafka custom resource . Red Hat AMQ Streams documentation Red Hat AMQ Streams TLS and SASL on Kafka documentation Event delivery 2.4. About OpenShift Serverless Functions OpenShift Serverless Functions enables developers to create and deploy stateless, event-driven functions as a Knative service on OpenShift Container Platform. The kn func CLI is provided as a plugin for the Knative kn CLI. You can use the kn func CLI to create, build, and deploy the container image as a Knative service on the cluster. 2.4.1. Included runtimes OpenShift Serverless Functions provides templates that can be used to create basic functions for the following runtimes: Quarkus Node.js TypeScript 2.4.2. steps Getting started with functions .
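To make the Knative Serving resources described above concrete, the following is a minimal sketch of a Service manifest; the name, namespace, and container image are placeholders rather than values taken from this documentation.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase              # placeholder name
  namespace: my-project       # placeholder namespace
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/showcase:latest   # placeholder image
          env:
            - name: TARGET
              value: "World"
Applying this manifest with oc apply -f <file> causes Knative Serving to create the corresponding route, configuration, and first revision automatically.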
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/serverless/about-serverless-1
Chapter 2. Getting started
Chapter 2. Getting started 2.1. Maintenance and support for monitoring Not all configuration options for the monitoring stack are exposed. The only supported way of configuring OpenShift Container Platform monitoring is by configuring the Cluster Monitoring Operator (CMO) using the options described in the Config map reference for the Cluster Monitoring Operator . Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the Config map reference for the Cluster Monitoring Operator , your changes will disappear because the CMO automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design. 2.1.1. Support considerations for monitoring Note Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed. The following modifications are explicitly not supported: Creating additional ServiceMonitor , PodMonitor , and PrometheusRule objects in the openshift-* and kube-* projects. Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility. Note The Alertmanager configuration is deployed as the alertmanager-main secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as the alertmanager-user-workload secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement. Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them. Deploying user-defined workloads to openshift-* , and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads. Enabling symptom based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator. Manually deploying monitoring resources into namespaces that have the openshift.io/cluster-monitoring: "true" label. Adding the openshift.io/cluster-monitoring: "true" label to namespaces. This label is reserved only for the namespaces with core OpenShift Container Platform components and Red Hat certified components. Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator. 2.1.2. Support policy for monitoring Operators Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates. 
While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. Overriding the Cluster Version Operator The spec.overrides parameter can be added to the configuration for the CVO to allow administrators to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed. 2.1.3. Support version matrix for monitoring components The following matrix contains information about versions of monitoring components for OpenShift Container Platform 4.12 and later releases: Table 2.1. OpenShift Container Platform and component versions OpenShift Container Platform Prometheus Operator Prometheus Metrics Server Alertmanager kube-state-metrics agent monitoring-plugin node-exporter agent Thanos 4.18 0.78.1 2.55.1 0.7.2 0.27.0 2.13.0 1.0.0 1.8.2 0.36.1 4.17 0.75.2 2.53.1 0.7.1 0.27.0 2.13.0 1.0.0 1.8.2 0.35.1 4.16 0.73.2 2.52.0 0.7.1 0.26.0 2.12.0 1.0.0 1.8.0 0.35.0 4.15 0.70.0 2.48.0 0.6.4 0.26.0 2.10.1 1.0.0 1.7.0 0.32.5 4.14 0.67.1 2.46.0 N/A 0.25.0 2.9.2 1.0.0 1.6.1 0.30.2 4.13 0.63.0 2.42.0 N/A 0.25.0 2.8.1 N/A 1.5.0 0.30.2 4.12 0.60.1 2.39.1 N/A 0.24.0 2.6.0 N/A 1.4.0 0.28.1 Note The openshift-state-metrics agent and Telemeter Client are OpenShift-specific components. Therefore, their versions correspond with the versions of OpenShift Container Platform. 2.2. Core platform monitoring first steps After OpenShift Container Platform is installed, core platform monitoring components immediately begin collecting metrics, which you can query and view. The default in-cluster monitoring stack includes the core platform Prometheus instance that collects metrics from your cluster and the core Alertmanager instance that routes alerts, among other components. Depending on who will use the monitoring stack and for what purposes, as a cluster administrator, you can further configure these monitoring components to suit the needs of different users in various scenarios. 2.2.1. Configuring core platform monitoring: Postinstallation steps After OpenShift Container Platform is installed, cluster administrators typically configure core platform monitoring to suit their needs. These activities include setting up storage and configuring options for Prometheus, Alertmanager, and other monitoring components. Note By default, in a newly installed OpenShift Container Platform system, users can query and view collected metrics. You need only configure an alert receiver if you want users to receive alert notifications. Any other configuration options listed here are optional. Create the cluster-monitoring-config ConfigMap object if it does not exist. Configure notifications for default platform alerts so that Alertmanager can send alerts to an external notification system such as email, Slack, or PagerDuty. 
For shorter term data retention, configure persistent storage for Prometheus and Alertmanager to store metrics and alert data. Specify the metrics data retention parameters for Prometheus and Thanos Ruler. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. By default, in a newly installed OpenShift Container Platform system, the monitoring ClusterOperator resource reports a PrometheusDataPersistenceNotConfigured status message to remind you that storage is not configured. For longer term data retention, configure the remote write feature to enable Prometheus to send ingested metrics to remote systems for storage. Important Be sure to add cluster ID labels to metrics for use with your remote write storage configuration. Grant monitoring cluster roles to any non-administrator users that need to access certain monitoring features. Assign tolerations to monitoring stack components so that administrators can move them to tainted nodes. Set the body size limit for metrics collection to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. Modify or create alerting rules for your cluster. These rules specify the conditions that trigger alerts, such as high CPU or memory usage, network latency, and so forth. Specify resource limits and requests for monitoring components to ensure that the containers that run monitoring components have enough CPU and memory resources. With the monitoring stack configured to suit your needs, Prometheus collects metrics from the specified services and stores these metrics according to your settings. You can go to the Observe pages in the OpenShift Container Platform web console to view and query collected metrics, manage alerts, identify performance bottlenecks, and scale resources as needed: View dashboards to visualize collected metrics, troubleshoot alerts, and monitor other information about your cluster. Query collected metrics by creating PromQL queries or using predefined queries. 2.3. User workload monitoring first steps As a cluster administrator, you can optionally enable monitoring for user-defined projects in addition to core platform monitoring. Non-administrator users such as developers can then monitor their own projects outside of core platform monitoring. Cluster administrators typically complete the following activities to configure user-defined projects so that users can view collected metrics, query these metrics, and receive alerts for their own projects: Enable user workload monitoring . Grant non-administrator users permissions to monitor user-defined projects by assigning the monitoring-rules-view , monitoring-rules-edit , or monitoring-edit cluster roles. Assign the user-workload-monitoring-config-edit role to grant non-administrator users permission to configure user-defined projects. Enable alert routing for user-defined projects so that developers and other users can configure custom alerts and alert routing for their projects. If needed, configure alert routing for user-defined projects to use an optional Alertmanager instance dedicated for use only by user-defined projects . Configure notifications for user-defined alerts . If you use the platform Alertmanager instance for user-defined alert routing, configure different alert receivers for default platform alerts and user-defined alerts. 2.4. 
Developer and non-administrator steps After monitoring for user-defined projects is enabled and configured, developers and other non-administrator users can then perform the following activities to set up and use monitoring for their own projects: Deploy and monitor services . Create and manage alerting rules . Receive and manage alerts for your projects. If granted the alert-routing-edit cluster role, configure alert routing . View dashboards by using the OpenShift Container Platform web console. Query the collected metrics by creating PromQL queries or using predefined queries.
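As a hedged example of the postinstallation steps above, the cluster-monitoring-config ConfigMap object typically looks like the following sketch; the retention period and storage size are illustrative values, not recommendations.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d                  # illustrative retention period
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 40Gi           # illustrative storage request
Apply it with oc apply -f cluster-monitoring-config.yaml and the Cluster Monitoring Operator reconciles the affected components.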
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/getting-started
Chapter 3. Deploying a virt-who configuration
Chapter 3. Deploying a virt-who configuration After you create a virt-who configuration, Satellite provides a script to automate the deployment process. The script installs virt-who and creates the individual and global virt-who configuration files. For Red Hat products, you must deploy each configuration file on the hypervisor specified in the file. For other products, you must deploy the configuration files on Satellite Server, Capsule Server, or a separate Red Hat Enterprise Linux server that is dedicated to running virt-who. To deploy the files on a hypervisor or Capsule Server, see Section 3.1, "Deploying a virt-who configuration on a hypervisor" . To deploy the files on Satellite Server, see Section 3.2, "Deploying a virt-who configuration on Satellite Server" . To deploy the files on a separate Red Hat Enterprise Linux server, see Section 3.3, "Deploying a virt-who configuration on a separate Red Hat Enterprise Linux server" . 3.1. Deploying a virt-who configuration on a hypervisor Use this procedure to deploy a virt-who configuration on the Red Hat hypervisor that you specified in the file. Global values apply only to this hypervisor. You can also use this procedure to deploy a vCenter or Hyper-V virt-who configuration on Capsule Server. Global configuration values apply to all virt-who configurations on the same Capsule Server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Register the hypervisor to Red Hat Satellite. If you are using Red Hat Virtualization Host (RHVH), update it to the latest version so that the minimum virt-who version is available. Virt-who is available by default on RHVH, but cannot be updated individually from the rhel-7-server-rhvh-4-rpms repository. Create a read-only virt-who user on the hypervisor. Create a virt-who configuration for your virtualization platform. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration. Click the Deploy tab. Under Configuration script , click Download the script . Copy the script to the hypervisor: Make the deployment script executable and run it: After the deployment is complete, delete the script: 3.2. Deploying a virt-who configuration on Satellite Server Use this procedure to deploy a vCenter or Hyper-V virt-who configuration on Satellite Server. Global configuration values apply to all virt-who configurations on Satellite Server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Create a read-only virt-who user on the hypervisor or virtualization manager. If you are deploying a Hyper-V virt-who configuration, enable remote management on the Hyper-V hypervisor. Create a virt-who configuration for your virtualization platform. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration. Under Hammer command , click Copy to clipboard . On Satellite Server, paste the Hammer command into your terminal. 3.3. Deploying a virt-who configuration on a separate Red Hat Enterprise Linux server Use this procedure to deploy a vCenter or Hyper-V virt-who configuration on a dedicated Red Hat Enterprise Linux 7 server. The server can be physical or virtual. Global configuration values apply to all virt-who configurations on this server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Create a read-only virt-who user on the hypervisor or virtualization manager. 
If you are deploying a Hyper-V virt-who configuration, enable remote management on the Hyper-V hypervisor. Create a virt-who configuration for your virtualization platform. Procedure On the Red Hat Enterprise Linux server, install Satellite Server's CA certificate: Register the Red Hat Enterprise Linux server to Satellite Server: Open a network port for communication between virt-who and Satellite Server: Open a network port for communication between virt-who and each hypervisor or virtualization manager: VMware vCenter: TCP port 443 Microsoft Hyper-V: TCP port 5985 In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration file. Click the Deploy tab. Under Configuration script , click Download the script . Copy the script to the Red Hat Enterprise Linux server: Make the deployment script executable and run it: After the deployment is complete, delete the script:
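Where the procedure says to paste the Hammer command, the copied command is usually similar to the following sketch; the configuration ID is a placeholder, and the exact command to run is the one shown in the Satellite web UI for your configuration.
# Hypothetical example -- use the command copied from the web UI.
hammer virt-who-config deploy --id 1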
[ "scp deploy_virt_who_config_1 .sh root@ hypervisor.example.com :", "chmod +x deploy_virt_who_config_1 .sh sh deploy_virt_who_config_1 .sh", "rm deploy_virt_who_config_1", "rpm -ivh http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm", "subscription-manager register --org= organization_label --auto-attach", "firewall-cmd --add-port=\"443/tcp\" firewall-cmd --add-port=\"443/tcp\" --permanent", "scp deploy_virt_who_config_1 .sh root@ rhel.example.com :", "chmod +x deploy_virt_who_config_1 .sh sh deploy_virt_who_config_1 .sh", "rm deploy_virt_who_config_1" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_virtual_machine_subscriptions/deploying-a-virt-who-configuration
Deploying Red Hat Insights on existing RHEL systems managed by Red Hat Update Infrastructure
Deploying Red Hat Insights on existing RHEL systems managed by Red Hat Update Infrastructure Red Hat Insights 1-latest Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_red_hat_insights_on_existing_rhel_systems_managed_by_red_hat_update_infrastructure/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_nodes/making-open-source-more-inclusive
Chapter 2. Configuring the Compute service (nova)
Chapter 2. Configuring the Compute service (nova) To designate and configure all node sets for a particular feature or workload, the Compute service (nova) provides a default ConfigMap CR named nova-extra-config , where you can add generic configuration that applies to all the node sets that use the default nova service. If you use this default nova-extra-config ConfigMap to add generic configuration to be applied to all the node sets, then you do not need to create a custom service. Example of a generic feature configuration: Replace <integer> with an integer that indicates when the configuration should be applied in the series of configuration files that are applied to etc/<service>/<service>.conf.d/ in the <service> container when the service is deployed. Numbers below 25 are reserved for the OpenStack services and Ansible configuration files. Replace <service> with the name of the service. Replace <feature> with a string that identifies the feature. Replace <section> with the section to which you want to add the configuration. You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must split the node set via scaling in the current node set, remove the scaled in nodes from the node set, and add them to a new node set. If your deployment has more than one node set, changes to the nova-extra-config ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. Procedure Create or update the default ConfigMap CR named nova-extra-config . Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes. Specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Deploy the data plane. Additional resources Customizing the data plane in Customizing the Red Hat OpenStack Services on OpenShift deployment
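The following is a minimal sketch of the OpenStackDataPlaneDeployment CR referenced in the procedure; the CR name and the node set name are placeholders and must match the OpenStackDataPlaneNodeSet CRs in your environment.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: nova-extra-config-redeploy    # placeholder name
  namespace: openstack
spec:
  nodeSets:
    - openstack-edpm                  # placeholder node set name; list every affected node set
Create it with oc create -f <file> to roll the updated nova-extra-config out to the listed node sets.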
[ "apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: <integer>-<service>-<feature>.conf: | [<section>] <config_option>=<value>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-the-compute-service_osp
Preface
Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_apache_camel/pr01
20.5. Removing Keytabs
20.5. Removing Keytabs Refreshing Kerberos tickets adds a new key to the keytab, but it does not clear the keytab. If a host is being unenrolled and re-added to the IdM domain or if there are Kerberos connection errors, then it may be necessary to remove the keytab and create a new keytab. This is done using the ipa-rmkeytab command. To remove all principals on the host, specify the realm with the -r option: To remove the keytab for a specific service, use the -p option to specify the service principal:
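As an illustrative sequence with placeholder hostnames and realm, removing a stale keytab and retrieving a fresh one for the host principal might look like the following; note that ipa-getkeytab generates new keys, which invalidates any copies of the old keytab.
ipa-rmkeytab -r EXAMPLE.COM -k /etc/krb5.keytab
ipa-getkeytab -s ipaserver.example.com -p host/client.example.com -k /etc/krb5.keytab
klist -kt /etc/krb5.keytab    # verify that the new keys are present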
[ "ipa-rmkeytab -r EXAMPLE.COM -k /etc/krb5.keytab", "ipa-rmkeytab -p ldap/client.example.com -k /etc/krb5.keytab" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/removing-keytabs
Appendix B. Contact information
Appendix B. Contact information Red Hat Process Automation Manager documentation team: [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/author-group
14.4. Configuring a Multihomed DHCP Server
14.4. Configuring a Multihomed DHCP Server A multihomed DHCP server serves multiple networks, that is, multiple subnets. The examples in these sections detail how to configure a DHCP server to serve multiple networks, select which network interfaces to listen on, and how to define network settings for systems that move networks. Before making any changes, back up the existing /etc/dhcp/dhcpd.conf file. The DHCP daemon will only listen on interfaces for which it finds a subnet declaration in the /etc/dhcp/dhcpd.conf file. The following is a basic /etc/dhcp/dhcpd.conf file, for a server that has two network interfaces, enp1s0 in a 10.0.0.0/24 network, and enp2s0 in a 172.16.0.0/24 network. Multiple subnet declarations allow you to define different settings for multiple networks: subnet 10.0.0.0 netmask 255.255.255.0 ; A subnet declaration is required for every network your DHCP server is serving. Multiple subnets require multiple subnet declarations. If the DHCP server does not have a network interface in a range of a subnet declaration, the DHCP server does not serve that network. If there is only one subnet declaration, and no network interfaces are in the range of that subnet, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : option subnet-mask 255.255.255.0 ; The option subnet-mask option defines a subnet mask, and overrides the netmask value in the subnet declaration. In simple cases, the subnet and netmask values are the same. option routers 10.0.0.1 ; The option routers option defines the default gateway for the subnet. This is required for systems to reach internal networks on a different subnet, as well as external networks. range 10.0.0.5 10.0.0.15 ; The range option specifies the pool of available IP addresses. Systems are assigned an address from the range of specified IP addresses. For further information, see the dhcpd.conf(5) man page. Warning To avoid misconfiguration when DHCP server gives IP addresses from one IP range to another physical Ethernet segment, make sure you do not enclose more subnets in a shared-network declaration. 14.4.1. Host Configuration Before making any changes, back up the existing /etc/sysconfig/dhcpd and /etc/dhcp/dhcpd.conf files. Configuring a Single System for Multiple Networks The following /etc/dhcp/dhcpd.conf example creates two subnets, and configures an IP address for the same system, depending on which network it connects to: host example0 The host declaration defines specific parameters for a single system, such as an IP address. To configure specific parameters for multiple hosts, use multiple host declarations. Most DHCP clients ignore the name in host declarations, and as such, this name can be anything, as long as it is unique to other host declarations. To configure the same system for multiple networks, use a different name for each host declaration, otherwise the DHCP daemon fails to start. Systems are identified by the hardware ethernet option, not the name in the host declaration. hardware ethernet 00:1A:6B:6A:2E:0B ; The hardware ethernet option identifies the system. To find this address, run the ip link command. fixed-address 10.0.0.20 ; The fixed-address option assigns a valid IP address to the system specified by the hardware ethernet option. This address must be outside the IP address pool specified with the range option. 
If option statements do not end with a semicolon, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : Configuring Systems with Multiple Network Interfaces The following host declarations configure a single system, which has multiple network interfaces, so that each interface receives the same IP address. This configuration will not work if both network interfaces are connected to the same network at the same time: For this example, interface0 is the first network interface, and interface1 is the second interface. The different hardware ethernet options identify each interface. If such a system connects to another network, add more host declarations, remembering to: assign a valid fixed-address for the network the host is connecting to. make the name in the host declaration unique. When a name given in a host declaration is not unique, the DHCP daemon fails to start, and an error such as the following is logged to /var/log/messages : This error was caused by having multiple host interface0 declarations defined in /etc/dhcp/dhcpd.conf .
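After editing /etc/dhcp/dhcpd.conf, a quick syntax check catches the missing-semicolon and duplicate host declaration errors described above before the daemon fails at startup; a minimal sketch:
dhcpd -t -cf /etc/dhcp/dhcpd.conf    # test the configuration file only; no leases are served
systemctl restart dhcpd              # restart the daemon once the test passes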
[ "default-lease-time 600 ; max-lease-time 7200 ; subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 10.0.0.1; range 10.0.0.5 10.0.0.15; } subnet 172.16.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 172.16.0.1; range 172.16.0.5 172.16.0.15; }", "dhcpd: No subnet declaration for enp1s0 (0.0.0.0). dhcpd: ** Ignoring requests on enp1s0. If this is not what dhcpd: you want, please write a subnet declaration dhcpd: in your dhcpd.conf file for the network segment dhcpd: to which interface enp2s0 is attached. ** dhcpd: dhcpd: dhcpd: Not configured to listen on any interfaces!", "default-lease-time 600 ; max-lease-time 7200 ; subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 10.0.0.1; range 10.0.0.5 10.0.0.15; } subnet 172.16.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option routers 172.16.0.1; range 172.16.0.5 172.16.0.15; } host example0 { hardware ethernet 00:1A:6B:6A:2E:0B; fixed-address 10.0.0.20; } host example1 { hardware ethernet 00:1A:6B:6A:2E:0B; fixed-address 172.16.0.20; }", "/etc/dhcp/dhcpd.conf line 20: semicolon expected. dhcpd: } dhcpd: ^ dhcpd: /etc/dhcp/dhcpd.conf line 38: unexpected end of file dhcpd: dhcpd: ^ dhcpd: Configuration file errors encountered -- exiting", "host interface0 { hardware ethernet 00:1a:6b:6a:2e:0b; fixed-address 10.0.0.18; } host interface1 { hardware ethernet 00:1A:6B:6A:27:3A; fixed-address 10.0.0.18; }", "dhcpd: /etc/dhcp/dhcpd.conf line 31: host interface0: already exists dhcpd: } dhcpd: ^ dhcpd: Configuration file errors encountered -- exiting" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_a_Multihomed_DHCP_Server
Chapter 8. Configuring the OpenTelemetry Collector metrics
Chapter 8. Configuring the OpenTelemetry Collector metrics The following list shows some of these metrics: Collector memory usage CPU utilization Number of active traces and spans processed Dropped spans, logs, or metrics Exporter and receiver statistics The Red Hat build of OpenTelemetry Operator automatically creates a service named <instance_name>-collector-monitoring that exposes the Collector's internal metrics. This service listens on port 8888 by default. You can use these metrics for monitoring the Collector's performance, resource consumption, and other internal behaviors. You can also use a Prometheus instance or another monitoring tool to scrape these metrics from the mentioned <instance_name>-collector-monitoring service. Note When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true , the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics. Prerequisites Monitoring for user-defined projects is enabled in the cluster. Procedure To enable metrics of an OpenTelemetry Collector instance, set the spec.observability.metrics.enableMetrics field to true : apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets . Filter by Source: User . Check that the ServiceMonitors or PodMonitors in the opentelemetry-collector-<instance_name> format have the Up status. Additional resources Enabling monitoring for user-defined projects
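As a quick, hedged way to see the raw metrics without configuring scraping, you can port-forward the monitoring service and query it directly; the instance name otel below is an assumption, so substitute the name of your own OpenTelemetryCollector instance.
oc port-forward svc/otel-collector-monitoring 8888:8888
# In a second terminal:
curl http://localhost:8888/metrics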
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: <name> spec: observability: metrics: enableMetrics: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/red_hat_build_of_opentelemetry/otel-configuring-metrics
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/proc_providing-feedback-on-red-hat-documentation_managing-smart-card-authentication
Chapter 7. Security
Chapter 7. Security AMQ JMS has a range of security-related configuration options that can be leveraged according to your application's needs. Basic user credentials such as username and password should be passed directly to the ConnectionFactory when creating the Connection within the application. However, if you are using the no-argument factory method, it is also possible to supply user credentials in the connection URI. For more information, see the Section 5.1, "JMS options" section. Another common security consideration is use of SSL/TLS. The client connects to servers over an SSL/TLS transport when the amqps URI scheme is specified in the connection URI , with various options available to configure behavior. For more information, see the Section 5.3, "SSL/TLS options" section. In concert with the earlier items, it may be desirable to restrict the client to allow use of only particular SASL mechanisms from those that may be offered by a server, rather than selecting from all it supports. For more information, see the Section 5.4, "AMQP options" section. Applications calling getObject() on a received ObjectMessage may wish to restrict the types created during deserialization. Note that message bodies composed using the AMQP type system do not use the ObjectInputStream mechanism and therefore do not require this precaution. For more information, see the the section called "Deserialization policy options" section. 7.1. Enabling OpenSSL support SSL/TLS connections can be configured to use a native OpenSSL implementation for improved performance. To use OpenSSL, the transport.useOpenSSL option must be enabled, and an OpenSSL support library must be available on the classpath. To use the system-installed OpenSSL libraries on Red Hat Enterprise Linux, install the openssl and apr RPM packages and add the following dependency to your POM file: Example: Adding native OpenSSL support <dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.31.Final-redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency> A list of OpenSSL library implementations is available from the Netty project. 7.2. Authenticating using Kerberos The client can be configured to authenticate using Kerberos when used with an appropriately configured server. To enable Kerberos, use the following steps. Configure the client to use the GSSAPI mechanism for SASL authentication using the amqp.saslMechanisms URI option. Set the java.security.auth.login.config system property to the path of a JAAS login configuration file containing appropriate configuration for a Kerberos LoginModule . The login configuration file might look like the following example: The precise configuration used will depend on how you wish the credentials to be established for the connection, and the particular LoginModule in use. For details of the Oracle Krb5LoginModule , see the Oracle Krb5LoginModule class reference . For details of the IBM Java 8 Krb5LoginModule , see the IBM Krb5LoginModule class reference . It is possible to configure a LoginModule to establish the credentials to use for the Kerberos process, such as specifying a principal and whether to use an existing ticket cache or keytab. 
If, however, the LoginModule configuration does not provide the means to establish all necessary credentials, it may then request and be passed the username and password values from the client Connection object if they were either supplied when creating the Connection using the ConnectionFactory or previously configured via its URI options. Note that Kerberos is supported only for authentication purposes. Use SSL/TLS connections for encryption. The following connection URI options can be used to influence the Kerberos authentication process. sasl.options.configScope The name of the login configuration entry used to authenticate. The default is amqp-jms-client . sasl.options.protocol The protocol value used during the GSSAPI SASL process. The default is amqp . sasl.options.serverName The serverName value used during the GSSAPI SASL process. The default is the server hostname from the connection URI. Similar to the amqp. and transport. options detailed previously, these options must be specified on a per-host basis or as all-host nested options in a failover URI.
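Putting these pieces together, a connection URI that combines an SSL/TLS transport with Kerberos authentication might look like the following sketch; the hostname, port, and truststore values are placeholders, and the options shown assume the client's standard transport.* and amqp.* configuration keys.
amqps://broker.example.com:5671?transport.trustStoreLocation=/path/to/truststore&transport.trustStorePassword=secret&amqp.saslMechanisms=GSSAPI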
[ "<dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.31.Final-redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency>", "amqp://myhost:5672?amqp.saslMechanisms=GSSAPI failover:(amqp://myhost:5672?amqp.saslMechanisms=GSSAPI)", "-Djava.security.auth.login.config=<login-config-file>", "amqp-jms-client { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/security
E.3.8. /proc/scsi/
E.3.8. /proc/scsi/ The primary file in this directory is /proc/scsi/scsi , which contains a list of every recognized SCSI device. From this listing, the type of device, as well as the model name, vendor, SCSI channel and ID data is available. For example, if a system contains a SCSI CD-ROM, a tape drive, a hard drive, and a RAID controller, this file looks similar to the following: Each SCSI driver used by the system has its own directory within /proc/scsi/ , which contains files specific to each SCSI controller using that driver. From the example, aic7xxx/ and megaraid/ directories are present, since two drivers are in use. The files in each of the directories typically contain an I/O address range, IRQ information, and statistics for the SCSI controller using that driver. Each controller can report a different type and amount of information. The Adaptec AIC-7880 Ultra SCSI host adapter's file in this example system produces the following output: This output reveals the transfer speed to the SCSI devices connected to the controller based on channel ID, as well as detailed statistics concerning the amount and sizes of files read or written by that device. For example, this controller is communicating with the CD-ROM at 20 megabytes per second, while the tape drive is only communicating at 10 megabytes per second.
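To inspect these files on a running system, commands similar to the following can be used; the aic7xxx/1 path is only an example, since the directory and file names depend on the drivers and host adapter numbers present.
cat /proc/scsi/scsi        # list every recognized SCSI device
ls /proc/scsi/             # one subdirectory per SCSI driver in use
cat /proc/scsi/aic7xxx/1   # per-controller details for that driver (path varies by system)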
[ "Attached devices: Host: scsi1 Channel: 00 Id: 05 Lun: 00 Vendor: NEC Model: CD-ROM DRIVE:466 Rev: 1.06 Type: CD-ROM ANSI SCSI revision: 02 Host: scsi1 Channel: 00 Id: 06 Lun: 00 Vendor: ARCHIVE Model: Python 04106-XXX Rev: 7350 Type: Sequential-Access ANSI SCSI revision: 02 Host: scsi2 Channel: 00 Id: 06 Lun: 00 Vendor: DELL Model: 1x6 U2W SCSI BP Rev: 5.35 Type: Processor ANSI SCSI revision: 02 Host: scsi2 Channel: 02 Id: 00 Lun: 00 Vendor: MegaRAID Model: LD0 RAID5 34556R Rev: 1.01 Type: Direct-Access ANSI SCSI revision: 02", "Adaptec AIC7xxx driver version: 5.1.20/3.2.4 Compile Options: TCQ Enabled By Default : Disabled AIC7XXX_PROC_STATS : Enabled AIC7XXX_RESET_DELAY : 5 Adapter Configuration: SCSI Adapter: Adaptec AIC-7880 Ultra SCSI host adapter Ultra Narrow Controller PCI MMAPed I/O Base: 0xfcffe000 Adapter SEEPROM Config: SEEPROM found and used. Adaptec SCSI BIOS: Enabled IRQ: 30 SCBs: Active 0, Max Active 1, Allocated 15, HW 16, Page 255 Interrupts: 33726 BIOS Control Word: 0x18a6 Adapter Control Word: 0x1c5f Extended Translation: Enabled Disconnect Enable Flags: 0x00ff Ultra Enable Flags: 0x0020 Tag Queue Enable Flags: 0x0000 Ordered Queue Tag Flags: 0x0000 Default Tag Queue Depth: 8 Tagged Queue By Device array for aic7xxx host instance 1: {255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255} Actual queue depth per device for aic7xxx host instance 1: {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1} Statistics: (scsi1:0:5:0) Device using Narrow/Sync transfers at 20.0 MByte/sec, offset 15 Transinfo settings: current(12/15/0/0), goal(12/15/0/0), user(12/15/0/0) Total transfers 0 (0 reads and 0 writes) < 2K 2K+ 4K+ 8K+ 16K+ 32K+ 64K+ 128K+ Reads: 0 0 0 0 0 0 0 0 Writes: 0 0 0 0 0 0 0 0 (scsi1:0:6:0) Device using Narrow/Sync transfers at 10.0 MByte/sec, offset 15 Transinfo settings: current(25/15/0/0), goal(12/15/0/0), user(12/15/0/0) Total transfers 132 (0 reads and 132 writes) < 2K 2K+ 4K+ 8K+ 16K+ 32K+ 64K+ 128K+ Reads: 0 0 0 0 0 0 0 0 Writes: 0 0 0 1 131 0 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-dir-scsi
9.2. Add Dependencies to Your Project
9.2. Add Dependencies to Your Project Set up Red Hat JBoss Data Grid by adding dependencies to your project. If you are using Maven or another build system that supports Maven dependencies, add the following to your project's pom.xml file: Note Replace the version value with the appropriate version of the libraries included in JBoss Data Grid.
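Once the dependency resolves, a minimal embedded cache can be exercised with code along the following lines; this is an illustrative sketch that uses the default cache manager configuration rather than a configuration taken from this guide.
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class QuickStart {
    public static void main(String[] args) {
        // Start a cache manager with the default configuration
        DefaultCacheManager cacheManager = new DefaultCacheManager();
        // Obtain the default cache, then store and read an entry
        Cache<String, String> cache = cacheManager.getCache();
        cache.put("key", "value");
        System.out.println(cache.get("key"));
        // Stop the cache manager to release resources
        cacheManager.stop();
    }
}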
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-embedded</artifactId> <version>USDVERSION</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/add_dependencies_to_your_project