title | content | commands | url
---|---|---|---
Chapter 124. KafkaUserQuotas schema reference | Chapter 124. KafkaUserQuotas schema reference Used in: KafkaUserSpec Full list of KafkaUserQuotas schema properties Configure clients to use quotas so that a user does not overload Kafka brokers. Example Kafka user quota configuration spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10 For more information about Kafka user quotas, refer to the Apache Kafka documentation. 124.1. KafkaUserQuotas schema properties Property Property type Description producerByteRate integer A quota on the maximum bytes per second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. consumerByteRate integer A quota on the maximum bytes per second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. requestPercentage integer A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads. controllerMutationRate number A quota on the rate at which mutations are accepted for the create topics request, the create partitions request, and the delete topics request. The rate is accumulated by the number of partitions created or deleted. | [
"spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkauserquotas-reference |
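For reference, the four KafkaUserQuotas properties correspond to Kafka's standard per-user client quota keys (producer_byte_rate, consumer_byte_rate, request_percentage, controller_mutation_rate). The User Operator applies these for you from spec.quotas; the following is only a minimal Java sketch of the equivalent Kafka Admin API operation, with an illustrative bootstrap address and user name:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class UserQuotasSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Illustrative bootstrap address; replace with your cluster's listener.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Quota entity for a single user; "my-user" is a placeholder for
            // the principal of your KafkaUser.
            ClientQuotaEntity user = new ClientQuotaEntity(
                    Map.of(ClientQuotaEntity.USER, "my-user"));

            // The four KafkaUserQuotas properties map onto Kafka's per-user
            // quota keys, using the values from the example above.
            List<ClientQuotaAlteration.Op> ops = List.of(
                    new ClientQuotaAlteration.Op("producer_byte_rate", 1048576.0),
                    new ClientQuotaAlteration.Op("consumer_byte_rate", 2097152.0),
                    new ClientQuotaAlteration.Op("request_percentage", 55.0),
                    new ClientQuotaAlteration.Op("controller_mutation_rate", 10.0));

            // Apply the quota alteration and wait for the brokers to accept it.
            admin.alterClientQuotas(
                    List.of(new ClientQuotaAlteration(user, ops))).all().get();
        }
    }
}
```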
Chapter 53. Saga | Chapter 53. Saga Only producer is supported The Saga component provides a bridge to execute custom actions within a route using the Saga EIP. The component should be used for advanced tasks, such as deciding to complete or compensate a Saga with completionMode set to MANUAL. Refer to the Saga EIP documentation for help on using sagas in common scenarios. 53.1. URI format 53.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 53.2.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code. 53.2.2. Configuring Endpoint Options Most configuration is done on endpoints, as endpoints often have many options that let you configure what you need the endpoint to do. The options are also categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, giving more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 53.3. Component Options The Saga component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring startup, the startup failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. true boolean 53.4. Endpoint Options The Saga endpoint is configured using URI syntax: with the following path and query parameters: 53.4.1. Path Parameters (1 parameter) Name Description Default Type action (producer) Required Action to execute (complete or compensate). Enum values: COMPLETE COMPENSATE SagaEndpointAction 53.4.2. 
Query Parameters (1 parameter) Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring startup, the startup failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false boolean 53.5. Spring Boot Auto-Configuration When using saga with Spring Boot, make sure to use the following Maven dependency to have support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saga-starter</artifactId> </dependency> The component supports 3 options, which are listed below. Name Description Default Type camel.component.saga.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. true Boolean camel.component.saga.enabled Whether to enable auto-configuration of the saga component. This is enabled by default. Boolean camel.component.saga.lazy-start-producer Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring startup, the startup failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean | [
"saga:action",
"saga:action",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saga-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-saga-component-starter |
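To make the MANUAL completion mode concrete, here is a minimal Java DSL sketch of a route that opens a saga in manual completion mode and later completes or compensates it through the Saga component. The direct: endpoint names, the bean reference, and the two-minute timeout are illustrative, and a saga service (for example, org.apache.camel.saga.InMemorySagaService) must be registered with the CamelContext for the routes to run:

```java
import java.util.concurrent.TimeUnit;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.SagaCompletionMode;

public class ManualSagaRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Open a saga that is NOT completed automatically when the route
        // ends: with completionMode MANUAL, a later exchange must decide.
        from("direct:newOrder")
            .saga()
                .completionMode(SagaCompletionMode.MANUAL)
                .timeout(2, TimeUnit.MINUTES) // compensate if never completed
            .to("bean:orderService?method=reserve"); // illustrative bean

        // Explicitly complete the saga bound to the current exchange.
        from("direct:orderPaid")
            .to("saga:complete");

        // Or explicitly compensate it instead.
        from("direct:orderCancelled")
            .to("saga:compensate");
    }
}
```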
Chapter 3. AlertService | Chapter 3. AlertService 3.1. CountAlerts GET /v1/alertscount CountAlerts counts how many alerts match the get request. 3.1.1. Description 3.1.2. Parameters 3.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.1.3. Return Type V1CountAlertsResponse 3.1.4. Content Type application/json 3.1.5. Responses Table 3.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountAlertsResponse 0 An unexpected error response. GooglerpcStatus 3.1.6. Samples 3.1.7. Common object reference 3.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.1.7.3. 
V1CountAlertsResponse Field Name Required Nullable Type Description Format count Integer int32 3.2. DeleteAlerts DELETE /v1/alerts 3.2.1. Description 3.2.2. Parameters 3.2.2.1. Query Parameters Name Description Required Default Pattern query.query - null query.pagination.limit - null query.pagination.offset - null query.pagination.sortOption.field - null query.pagination.sortOption.reversed - null query.pagination.sortOption.aggregateBy.aggrFunc - UNSET query.pagination.sortOption.aggregateBy.distinct - null confirm - null 3.2.3. Return Type V1DeleteAlertsResponse 3.2.4. Content Type application/json 3.2.5. Responses Table 3.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeleteAlertsResponse 0 An unexpected error response. GooglerpcStatus 3.2.6. Samples 3.2.7. Common object reference 3.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.2.7.3. V1DeleteAlertsResponse Field Name Required Nullable Type Description Format numDeleted Long int64 dryRun Boolean 3.3. ListAlerts GET /v1/alerts List returns the slim list version of the alerts. 3.3.1. Description 3.3.2. Parameters 3.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.3.3. Return Type V1ListAlertsResponse 3.3.4. Content Type application/json 3.3.5. Responses Table 3.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListAlertsResponse 0 An unexpected error response. GooglerpcStatus 3.3.6. Samples 3.3.7. Common object reference 3.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.3.7.2. ListAlertCommonEntityInfo Fields common to all entities that an alert might belong to. Field Name Required Nullable Type Description Format clusterName String namespace String clusterId String namespaceId String resourceType StorageListAlertResourceType DEPLOYMENT, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, 3.3.7.3. ListAlertPolicyDevFields Field Name Required Nullable Type Description Format SORTName String 3.3.7.4. ListAlertResourceEntity Field Name Required Nullable Type Description Format name String 3.3.7.5. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.3.7.5.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.3.7.6. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 3.3.7.7. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 3.3.7.8. StorageListAlert Field Name Required Nullable Type Description Format id String lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, time Date date-time policy StorageListAlertPolicy state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, enforcementCount Integer int32 enforcementAction StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, commonEntityInfo ListAlertCommonEntityInfo deployment StorageListAlertDeployment resource ListAlertResourceEntity 3.3.7.9. StorageListAlertDeployment Field Name Required Nullable Type Description Format id String name String clusterName String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. This field has moved to CommonEntityInfo namespace String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. This field has moved to CommonEntityInfo clusterId String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. This field has moved to CommonEntityInfo inactive Boolean namespaceId String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. This field has moved to CommonEntityInfo deploymentType String 3.3.7.10. StorageListAlertPolicy Field Name Required Nullable Type Description Format id String name String severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, description String categories List of string developerInternalFields ListAlertPolicyDevFields 3.3.7.11. 
StorageListAlertResourceType Enum Values DEPLOYMENT SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 3.3.7.12. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.3.7.13. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 3.3.7.14. V1ListAlertsResponse Field Name Required Nullable Type Description Format alerts List of StorageListAlert 3.4. GetAlert GET /v1/alerts/{id} GetAlert returns the alert given its id. 3.4.1. Description 3.4.2. Parameters 3.4.2.1. Path Parameters Name Description Required Default Pattern id X null 3.4.3. Return Type StorageAlert 3.4.4. Content Type application/json 3.4.5. Responses Table 3.4. HTTP Response Codes Code Message Datatype 200 A successful response. StorageAlert 0 An unexpected error response. GooglerpcStatus 3.4.6. Samples 3.4.7. Common object reference 3.4.7.1. AlertDeploymentContainer Field Name Required Nullable Type Description Format image StorageContainerImage name String 3.4.7.2. AlertEnforcement Field Name Required Nullable Type Description Format action StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, message String 3.4.7.3. AlertEntityType Enum Values UNSET DEPLOYMENT CONTAINER_IMAGE RESOURCE 3.4.7.4. AlertProcessViolation Field Name Required Nullable Type Description Format message String processes List of StorageProcessIndicator 3.4.7.5. AlertResourceResourceType Enum Values UNKNOWN SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 3.4.7.6. AlertViolation Field Name Required Nullable Type Description Format message String keyValueAttrs ViolationKeyValueAttrs networkFlowInfo ViolationNetworkFlowInfo type AlertViolationType GENERIC, K8S_EVENT, NETWORK_FLOW, NETWORK_POLICY, time Date Indicates violation time. This field differs from top-level field 'time' which represents last time the alert occurred in case of multiple occurrences of the policy alert. As of 55.0, this field is set only for kubernetes event violations, but may not be limited to it in future. date-time 3.4.7.7. AlertViolationType Enum Values GENERIC K8S_EVENT NETWORK_FLOW NETWORK_POLICY 3.4.7.8. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.4.7.9. KeyValueAttrsKeyValueAttr Field Name Required Nullable Type Description Format key String value String 3.4.7.10. NetworkFlowInfoEntity Field Name Required Nullable Type Description Format name String entityType StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, deploymentNamespace String deploymentType String port Integer int32 3.4.7.11. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 3.4.7.12. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 3.4.7.13. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.4.7.13.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.4.7.14. StorageAlert Field Name Required Nullable Type Description Format id String policy StoragePolicy lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, clusterId String clusterName String namespace String namespaceId String deployment StorageAlertDeployment image StorageContainerImage resource StorageAlertResource violations List of AlertViolation For run-time phase alert, a maximum of 40 violations are retained. processViolation AlertProcessViolation enforcement AlertEnforcement time Date date-time firstOccurred Date date-time resolvedAt Date The time at which the alert was resolved. Only set if ViolationState is RESOLVED. date-time state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, snoozeTill Date date-time platformComponent Boolean entityType AlertEntityType UNSET, DEPLOYMENT, CONTAINER_IMAGE, RESOURCE, 3.4.7.15. StorageAlertDeployment Field Name Required Nullable Type Description Format id String name String type String namespace String This field has to be duplicated in Alert for scope management and search. 
namespaceId String This field has to be duplicated in Alert for scope management and search. labels Map of string clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. containers List of AlertDeploymentContainer annotations Map of string inactive Boolean 3.4.7.16. StorageAlertResource Field Name Required Nullable Type Description Format resourceType AlertResourceResourceType UNKNOWN, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, name String clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. namespace String This field has to be duplicated in Alert for scope management and search. namespaceId String This field has to be duplicated in Alert for scope management and search. 3.4.7.17. StorageBooleanOperator Enum Values OR AND 3.4.7.18. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 3.4.7.19. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 3.4.7.20. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 3.4.7.21. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 3.4.7.22. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 3.4.7.23. StorageExclusionImage Field Name Required Nullable Type Description Format name String 3.4.7.24. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 3.4.7.25. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 3.4.7.26. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 3.4.7.27. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 3.4.7.28. StoragePolicy Field Name Required Nullable Type Description Format id String name String Name of the policy. Must be unique. description String Free-form text description of this policy. rationale String remediation String Describes how to remediate a violation of this policy. disabled Boolean Toggles whether or not this policy will be executing and actively firing alerts. categories List of string List of categories that this policy falls under. 
Category names must already exist in Central. lifecycleStages List of StorageLifecycleStage Describes which policy lifecycle stages this policy applies to. Choices are DEPLOY, BUILD, and RUNTIME. eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion Define deployments or images that should be excluded from this policy. scope List of StorageScope Defines clusters, namespaces, and deployments that should be included in this policy. If no scopes are defined, everything is included. severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Lists the enforcement actions to take when a violation from this policy is identified. Possible values are UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, and FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT. notifiers List of string List of IDs of the notifiers that should be triggered when a violation from this policy is identified. IDs should be in the form of a UUID and are found through the Central API. lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection PolicySections define the violation criteria for this policy. mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. source StoragePolicySource IMPERATIVE, DECLARATIVE, 3.4.7.29. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String Defines which field on a deployment or image this PolicyGroup evaluates. See https://docs.openshift.com/acs/operating/manage-security-policies.html#policy-criteria_manage-security-policies for a complete list of possible values. booleanOperator StorageBooleanOperator OR, AND, negate Boolean Determines if the evaluation of this PolicyGroup is negated. Defaults to false. values List of StoragePolicyValue 3.4.7.30. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup The set of policy groups that make up this section. Each group can be considered an individual criterion. 3.4.7.31. StoragePolicySource Enum Values IMPERATIVE DECLARATIVE 3.4.7.32. StoragePolicyValue Field Name Required Nullable Type Description Format value String 3.4.7.33. 
StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 3.4.7.34. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 3.4.7.35. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 3.4.7.36. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 3.4.7.37. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.4.7.38. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 3.4.7.39. ViolationKeyValueAttrs Field Name Required Nullable Type Description Format attrs List of KeyValueAttrsKeyValueAttr 3.4.7.40. ViolationNetworkFlowInfo Field Name Required Nullable Type Description Format protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, source NetworkFlowInfoEntity destination NetworkFlowInfoEntity 3.5. ResolveAlert PATCH /v1/alerts/{id}/resolve ResolveAlert marks the given alert (by ID) as resolved. 3.5.1. Description 3.5.2. Parameters 3.5.2.1. Path Parameters Name Description Required Default Pattern id X null 3.5.2.2. Body Parameter Name Description Required Default Pattern body AlertServiceResolveAlertBody X 3.5.3. Return Type Object 3.5.4. Content Type application/json 3.5.5. Responses Table 3.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 3.5.6. Samples 3.5.7. Common object reference 3.5.7.1. AlertServiceResolveAlertBody Field Name Required Nullable Type Description Format whitelist Boolean addToBaseline Boolean 3.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.5.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.5.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.6. SnoozeAlert PATCH /v1/alerts/{id}/snooze SnoozeAlert is deprecated. 3.6.1. Description 3.6.2. Parameters 3.6.2.1. Path Parameters Name Description Required Default Pattern id X null 3.6.2.2. Body Parameter Name Description Required Default Pattern body AlertServiceSnoozeAlertBody X 3.6.3. Return Type Object 3.6.4. Content Type application/json 3.6.5. Responses Table 3.6. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 3.6.6. Samples 3.6.7. Common object reference 3.6.7.1. AlertServiceSnoozeAlertBody Field Name Required Nullable Type Description Format snoozeTill Date date-time 3.6.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.6.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.6.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.7. ResolveAlerts PATCH /v1/alerts/resolve ResolveAlertsByQuery marks alerts matching search query as resolved. 3.7.1. Description 3.7.2. Parameters 3.7.2.1. Body Parameter Name Description Required Default Pattern body V1ResolveAlertsRequest X 3.7.3. Return Type Object 3.7.4. Content Type application/json 3.7.5. Responses Table 3.7. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 3.7.6. Samples 3.7.7. Common object reference 3.7.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.7.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.7.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. 
The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.7.7.3. V1ResolveAlertsRequest Field Name Required Nullable Type Description Format query String 3.8. GetAlertsCounts GET /v1/alerts/summary/counts GetAlertsCounts returns the number of alerts in the requested cluster or category. 3.8.1. Description 3.8.2. Parameters 3.8.2.1. Query Parameters Name Description Required Default Pattern request.query - null request.pagination.limit - null request.pagination.offset - null request.pagination.sortOption.field - null request.pagination.sortOption.reversed - null request.pagination.sortOption.aggregateBy.aggrFunc - UNSET request.pagination.sortOption.aggregateBy.distinct - null groupBy - UNSET 3.8.3. Return Type V1GetAlertsCountsResponse 3.8.4. Content Type application/json 3.8.5. Responses Table 3.8. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAlertsCountsResponse 0 An unexpected error response. GooglerpcStatus 3.8.6. Samples 3.8.7. Common object reference 3.8.7.1. AlertGroupAlertCounts Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, count String int64 3.8.7.2. GetAlertsCountsResponseAlertGroup Field Name Required Nullable Type Description Format group String counts List of AlertGroupAlertCounts 3.8.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.8.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.8.7.4.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.8.7.5. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.8.7.6. V1GetAlertsCountsResponse Field Name Required Nullable Type Description Format groups List of GetAlertsCountsResponseAlertGroup 3.9. GetAlertsGroup GET /v1/alerts/summary/groups GetAlertsGroup returns alerts grouped by policy. 3.9.1. Description 3.9.2. Parameters 3.9.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.9.3. Return Type V1GetAlertsGroupResponse 3.9.4. Content Type application/json 3.9.5. Responses Table 3.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAlertsGroupResponse 0 An unexpected error response. GooglerpcStatus 3.9.6. Samples 3.9.7. Common object reference 3.9.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.9.7.2. ListAlertPolicyDevFields Field Name Required Nullable Type Description Format SORTName String 3.9.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.9.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.9.7.4. StorageListAlertPolicy Field Name Required Nullable Type Description Format id String name String severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, description String categories List of string developerInternalFields ListAlertPolicyDevFields 3.9.7.5. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.9.7.6. V1GetAlertsGroupResponse Field Name Required Nullable Type Description Format alertsByPolicies List of V1GetAlertsGroupResponsePolicyGroup 3.9.7.7. V1GetAlertsGroupResponsePolicyGroup Field Name Required Nullable Type Description Format policy StorageListAlertPolicy numAlerts String int64 3.10. GetAlertTimeseries GET /v1/alerts/summary/timeseries GetAlertTimeseries returns the alerts sorted by time. 3.10.1. Description 3.10.2. Parameters 3.10.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.10.3. Return Type V1GetAlertTimeseriesResponse 3.10.4. Content Type application/json 3.10.5. Responses Table 3.10. 
HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAlertTimeseriesResponse 0 An unexpected error response. GooglerpcStatus 3.10.6. Samples 3.10.7. Common object reference 3.10.7.1. ClusterAlertsAlertEvents Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, events List of V1AlertEvent 3.10.7.2. GetAlertTimeseriesResponseClusterAlerts Field Name Required Nullable Type Description Format cluster String severities List of ClusterAlertsAlertEvents 3.10.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 3.10.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.10.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 3.10.7.5. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.10.7.6. 
V1AlertEvent Field Name Required Nullable Type Description Format time String int64 type V1Type CREATED, REMOVED, id String 3.10.7.7. V1GetAlertTimeseriesResponse Field Name Required Nullable Type Description Format clusters List of GetAlertTimeseriesResponseClusterAlerts 3.10.7.8. V1Type Enum Values CREATED REMOVED | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"A special ListAlert-only enumeration of all resource types. Unlike Alert.Resource.ResourceType this also includes deployment as a type This must be kept in sync with Alert.Resource.ResourceType (excluding the deployment value)",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 24",
"Represents an alert on a kubernetes resource other than a deployment (configmaps, secrets, etc.)",
"Next tag: 12",
"Next tag: 28",
"Next available tag: 13",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
]
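As an illustration of calling one of these endpoints from the command line (the host name and token shown here are placeholders, not values taken from this document): curl -H "Authorization: Bearer <api_token>" "https://<central_address>/v1/alerts/summary/timeseries" The response is the JSON-encoded V1GetAlertTimeseriesResponse described above.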
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/alertservice |
Chapter 4. Hardening Your System with Tools and Services 4.1. Desktop Security Red Hat Enterprise Linux 7 offers several ways to harden the desktop against attacks and prevent unauthorized access. This section describes recommended practices for user passwords, session and account locking, and safe handling of removable media. 4.1.1. Password Security Passwords are the primary method that Red Hat Enterprise Linux 7 uses to verify a user's identity. This is why password security is so important for protection of the user, the workstation, and the network. For security purposes, the installation program configures the system to use Secure Hash Algorithm 512 ( SHA512 ) and shadow passwords. It is highly recommended that you do not alter these settings. If shadow passwords are deselected during installation, all passwords are stored as a one-way hash in the world-readable /etc/passwd file, which makes the system vulnerable to offline password-cracking attacks. If an intruder can gain access to the machine as a regular user, he can copy the /etc/passwd file to his own machine and run any number of password-cracking programs against it. If there is an insecure password in the file, it is only a matter of time before the password cracker discovers it. Shadow passwords eliminate this type of attack by storing the password hashes in the file /etc/shadow , which is readable only by the root user. This forces a potential attacker to attempt password cracking remotely by logging into a network service on the machine, such as SSH or FTP. This sort of brute-force attack is much slower and leaves an obvious trail, as hundreds of failed login attempts are written to system files. Of course, if the cracker starts an attack in the middle of the night on a system with weak passwords, the cracker may have gained access before dawn and edited the log files to cover his tracks. In addition to format and storage considerations is the issue of content. The single most important thing a user can do to protect his account against a password-cracking attack is create a strong password. Note Red Hat recommends using a central authentication solution, such as Red Hat Identity Management (IdM). Using a central solution is preferred over using local passwords. For details, see: Introduction to Red Hat Identity Management Defining Password Policies 4.1.1.1. Creating Strong Passwords When creating a secure password, the user must remember that long passwords are stronger than short and complex ones. It is not a good idea to create a password of just eight characters, even if it contains digits, special characters and uppercase letters. Password-cracking tools, such as John The Ripper, are optimized for breaking such passwords, which are also hard for a person to remember. In information theory, entropy is the level of uncertainty associated with a random variable and is presented in bits. The higher the entropy value, the more secure the password is. According to NIST SP 800-63-1, passwords that are not present in a dictionary comprised of 50000 commonly selected passwords should have at least 10 bits of entropy. As such, a password that consists of four random words contains around 40 bits of entropy.
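As a quick sanity check of that figure (an illustrative calculation, assuming each word is drawn uniformly at random from a list of about 1000 words): the entropy is 4 x log2(1000), which is approximately 4 x 10 = 40 bits, comfortably above the recommended 10-bit minimum.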
A long password consisting of multiple words for added security is also called a passphrase , for example: If the system enforces the use of uppercase letters, digits, or special characters, a passphrase that follows the above recommendation can be modified in a simple way, for example by changing the first character to uppercase and appending " 1! ". Note that such a modification does not increase the security of the passphrase significantly. Another way to create a password is to use a password generator. pwmake is a command-line tool for generating random passwords that consist of all four groups of characters - uppercase, lowercase, digits and special characters. The utility allows you to specify the number of entropy bits that are used to generate the password. The entropy is pulled from /dev/urandom . The minimum number of bits you can specify is 56, which is enough for passwords on systems and services where brute-force attacks are rare. 64 bits is adequate for applications where the attacker does not have direct access to the password hash file. For situations when the attacker might obtain direct access to the password hash, or when the password is used as an encryption key, 80 to 128 bits should be used. If you specify an invalid number of entropy bits, pwmake uses the default number of bits. To create a password of 128 bits, enter the following command: While there are different approaches to creating a secure password, always avoid the following bad practices: Using a single dictionary word, a word in a foreign language, an inverted word, or only numbers. Using fewer than 10 characters for a password or passphrase. Using a sequence of keys from the keyboard layout. Writing down your passwords. Using personal information in a password, such as birth dates, anniversaries, family member names, or pet names. Using the same passphrase or password on multiple machines. While creating secure passwords is imperative, managing them properly is also important, especially for system administrators within larger organizations. The following section details good practices for creating and managing user passwords within an organization. 4.1.1.2. Forcing Strong Passwords If an organization has a large number of users, the system administrators have two basic options available to force the use of strong passwords. They can create passwords for the users, or they can let users create their own passwords while verifying that the passwords are of adequate strength. Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting task as the organization grows. It also increases the risk of users writing their passwords down, thus exposing them. For these reasons, most system administrators prefer to have the users create their own passwords, but actively verify that these passwords are strong enough. In some cases, administrators may force users to change their passwords periodically through password aging. When users are asked to create or change passwords, they can use the passwd command-line utility, which is PAM -aware ( Pluggable Authentication Modules ) and checks to see if the password is too short or otherwise easy to crack. This checking is performed by the pam_pwquality.so PAM module. Note In Red Hat Enterprise Linux 7, the pam_pwquality PAM module replaced pam_cracklib , which was used in Red Hat Enterprise Linux 6 as the default module for password quality checking. It uses the same back end as pam_cracklib .
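The libpwquality library that provides this module also ships the pwscore utility, which lets you test a candidate password against the configured rules from the command line (an illustrative invocation; the password shown is only an example): echo 'randomword1 randomword2 randomword3 randomword4' | pwscore If the password passes all checks, pwscore prints a quality score between 0 and 100; otherwise, it prints an error identifying the failed check.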
The pam_pwquality module is used to check a password's strength against a set of rules. Its procedure consists of two steps: first it checks whether the provided password is found in a dictionary. If not, it continues with a number of additional checks. pam_pwquality is stacked alongside other PAM modules in the password component of the /etc/pam.d/passwd file, and the custom set of rules is specified in the /etc/security/pwquality.conf configuration file. For a complete list of these checks, see the pwquality.conf (8) manual page. Example 4.1. Configuring password strength-checking in pwquality.conf To enable pam_pwquality , add the following line to the password stack in the /etc/pam.d/passwd file: Options for the checks are specified one per line. For example, to require a password with a minimum length of 8 characters, including all four classes of characters, add the following lines to the /etc/security/pwquality.conf file: To set a password strength check for character sequences and identical consecutive characters, add the following lines to /etc/security/pwquality.conf : In this example, the password entered cannot contain more than 3 characters in a monotonic sequence, such as abcd , or more than 3 identical consecutive characters, such as 1111 . Note As the root user is the one who enforces the rules for password creation, they can set any password for themselves or for a regular user, despite the warning messages. 4.1.1.3. Configuring Password Aging Password aging is another technique used by system administrators to defend against bad passwords within an organization. Password aging means that after a specified period (usually 90 days), the user is prompted to create a new password. The theory behind this is that if a user is forced to change his password periodically, a cracked password is only useful to an intruder for a limited amount of time. The downside to password aging, however, is that users are more likely to write their passwords down. To specify password aging under Red Hat Enterprise Linux 7, make use of the chage command. Important In Red Hat Enterprise Linux 7, shadow passwords are enabled by default. For more information, see the Red Hat Enterprise Linux 7 System Administrator's Guide . The -M option of the chage command specifies the maximum number of days the password is valid. For example, to set a user's password to expire in 90 days, use the following command: chage -M 90 username In the above command, replace username with the name of the user. To disable password expiration, use the value of -1 after the -M option. For more information on the options available with the chage command, see the table below. Table 4.1. chage command line options Option Description -d days Specifies the number of days since January 1, 1970 the password was changed. -E date Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used. -I days Specifies the number of inactive days after the password expiration before locking the account. If the value is 0 , the account is not locked after the password expires. -l Lists current account aging settings. -m days Specifies the minimum number of days that must pass between password changes. If the value is 0 , the user can change the password at any time. -M days Specifies the maximum number of days for which the password is valid.
When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account. -W days Specifies the number of days before the password expiration date to warn the user. You can also use the chage command in interactive mode to modify multiple password aging and account details. Use the following command to enter interactive mode: chage <username> The following is a sample interactive session using this command: You can configure a password to expire the first time a user logs in. This forces users to change passwords immediately. Set up an initial password. To assign a default password, enter the following command at a shell prompt as root : passwd username Warning The passwd utility has the option to set a null password. Using a null password, while convenient, is a highly insecure practice, as any third party can log in and access the system using the insecure user name. Avoid using null passwords wherever possible. If it is not possible, always make sure that the user is ready to log in before unlocking an account with a null password. Force immediate password expiration by running the following command as root : chage -d 0 username This command sets the value for the date the password was last changed to the epoch (January 1, 1970). This value forces immediate password expiration no matter what password aging policy, if any, is in place. Upon the initial login, the user is now prompted for a new password. 4.1.2. Account Locking In Red Hat Enterprise Linux 7, the pam_faillock PAM module allows system administrators to lock out user accounts after a specified number of failed attempts. Limiting user login attempts serves mainly as a security measure that aims to prevent possible brute-force attacks targeted at obtaining a user's account password. With the pam_faillock module, failed login attempts are stored in a separate file for each user in the /var/run/faillock directory. Note The order of lines in the failed attempt log files is important. Any change in this order can lock all user accounts, including the root user account when the even_deny_root option is used. Follow these steps to configure account locking: To lock out any non-root user after three unsuccessful attempts and unlock that user after 10 minutes, add two lines to the auth section of the /etc/pam.d/system-auth and /etc/pam.d/password-auth files. After your edits, the entire auth section in both files should look like this: Lines 2 and 4 have been added. Add the following line to the account section of both files specified in the previous step: To apply account locking for the root user as well, add the even_deny_root option to the pam_faillock entries in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files: When the user john fails to log in three times, his account is locked on the fourth attempt: To prevent the system from locking users out even after multiple failed logins, add the following line just above the line where pam_faillock is called for the first time in both /etc/pam.d/system-auth and /etc/pam.d/password-auth . Also replace user1 , user2 , and user3 with the actual user names.
To view the number of failed attempts per user, run, as root , the following command: To unlock a user's account, run, as root , the following command: Important Running cron jobs resets the pam_faillock failure counter of the user that runs the cron job, so pam_faillock should not be configured for cron . See the Knowledge Centered Support (KCS) solution for more information. Keeping Custom Settings with authconfig When modifying authentication configuration using the authconfig utility, the system-auth and password-auth files are overwritten with the settings from the authconfig utility. This can be avoided by creating symbolic links in place of the configuration files, which authconfig recognizes and does not overwrite. In order to use custom settings in the configuration files and authconfig simultaneously, configure account locking using the following steps: Check whether the system-auth and password-auth files are already symbolic links pointing to system-auth-ac and password-auth-ac (this is the system default): If the output is similar to the following, the symbolic links are in place, and you can skip to step 3: If the system-auth and password-auth files are not symbolic links, continue with the next step. Rename the configuration files: Create configuration files with your custom settings: The /etc/pam.d/system-auth-local file should contain the following lines: The /etc/pam.d/password-auth-local file should contain the following lines: Create the following symbolic links: For more information on various pam_faillock configuration options, see the pam_faillock (8) manual page. Removing the nullok option The nullok option, which allows users to log in with a blank password if the password field in the /etc/shadow file is empty, is enabled by default. To disable the nullok option, remove the nullok string from configuration files in the /etc/pam.d/ directory, such as /etc/pam.d/system-auth or /etc/pam.d/password-auth . See the Will nullok option allow users to login without entering a password? KCS solution for more information. 4.1.3. Session Locking Users may need to leave their workstation unattended for a number of reasons during everyday operation. This could present an opportunity for an attacker to physically access the machine, especially in environments with insufficient physical security measures (see Section 1.2.1, "Physical Controls" ). Laptops are especially exposed since their mobility interferes with physical security. You can alleviate these risks by using session locking features which prevent access to the system until a correct password is entered. Note The main advantage of locking the screen instead of logging out is that a lock allows the user's processes (such as file transfers) to continue running. Logging out would stop these processes. 4.1.3.1. Locking Virtual Consoles Using vlock To lock a virtual console, use the vlock utility. Install it by entering the following command as root: After installation, you can lock any console session by using the vlock command without any additional parameters. This locks the currently active virtual console session while still allowing access to the others. To prevent access to all virtual consoles on the workstation, execute the following: In this case, vlock locks the currently active console and the -a option prevents switching to other virtual consoles. See the vlock(1) man page for additional information. 4.1.4.
Enforcing Read-Only Mounting of Removable Media To enforce read-only mounting of removable media (such as USB flash disks), the administrator can use a udev rule to detect removable media and configure them to be mounted read-only using the blockdev utility. This is sufficient for enforcing read-only mounting of physical media. Using blockdev to Force Read-Only Mounting of Removable Media To force all removable media to be mounted read-only, create a new udev configuration file named, for example, 80-readonly-removables.rules in the /etc/udev/rules.d/ directory with the following content: SUBSYSTEM=="block",ATTRS{removable}=="1",RUN{program}="/sbin/blockdev --setro %N" The above udev rule ensures that any newly connected removable block (storage) device is automatically configured as read-only using the blockdev utility. Applying New udev Settings For these settings to take effect, the new udev rules need to be applied. The udev service automatically detects changes to its configuration files, but new settings are not applied to already existing devices. Only newly connected devices are affected by the new settings. Therefore, you need to unmount and unplug all connected removable media to ensure that the new settings are applied to them when they are plugged in. To force udev to re-apply all rules to already existing devices, enter the following command as root : Note that forcing udev to re-apply all rules using the above command does not affect any storage devices that are already mounted. To force udev to reload all rules (in case the new rules are not automatically detected for some reason), use the following command: | [
"randomword1 randomword2 randomword3 randomword4",
"pwmake 128",
"password required pam_pwquality.so retry=3",
"minlen = 8 minclass = 4",
"maxsequence = 3 maxrepeat = 3",
"~]# chage juan Changing the aging information for juan Enter the new value, or press ENTER for the default Minimum Password Age [0]: 10 Maximum Password Age [99999]: 90 Last Password Change (YYYY-MM-DD) [2006-08-18]: Password Expiration Warning [7]: Password Inactive [-1]: Account Expiration Date (YYYY-MM-DD) [1969-12-31]:",
"1 auth required pam_env.so 2 auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 3 auth sufficient pam_unix.so nullok try_first_pass 4 auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=600 5 auth requisite pam_succeed_if.so uid >= 1000 quiet_success 6 auth required pam_deny.so",
"account required pam_faillock.so",
"auth required pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600 auth sufficient pam_unix.so nullok try_first_pass auth [default=die] pam_faillock.so authfail audit deny=3 even_deny_root unlock_time=600 account required pam_faillock.so",
"~]USD su - john Account locked due to 3 failed logins su: incorrect password",
"auth [success=1 default=ignore] pam_succeed_if.so user in user1:user2:user3",
"~]USD faillock john: When Type Source Valid 2013-03-05 11:44:14 TTY pts/0 V",
"faillock --user <username> --reset",
"~]# ls -l /etc/pam.d/{password,system}-auth",
"lrwxrwxrwx. 1 root root 16 24. Feb 09.29 /etc/pam.d/password-auth -> password-auth-ac lrwxrwxrwx. 1 root root 28 24. Feb 09.29 /etc/pam.d/system-auth -> system-auth-ac",
"~]# mv /etc/pam.d/system-auth /etc/pam.d/system-auth-ac ~]# mv /etc/pam.d/password-auth /etc/pam.d/password-auth-ac",
"~]# vi /etc/pam.d/system-auth-local",
"auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include system-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include system-auth-ac password include system-auth-ac session include system-auth-ac",
"~]# vi /etc/pam.d/password-auth-local",
"auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include password-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include password-auth-ac password include password-auth-ac session include password-auth-ac",
"~]# ln -sf /etc/pam.d/system-auth-local /etc/pam.d/system-auth ~]# ln -sf /etc/pam.d/password-auth-local /etc/pam.d/password-auth",
"~]# yum install kbd",
"vlock -a",
"SUBSYSTEM==\"block\",ATTRS{removable}==\"1\",RUN{program}=\"/sbin/blockdev --setro %N\"",
"~# udevadm trigger",
"~# udevadm control --reload"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-Hardening_Your_System_with_Tools_and_Services |
E.2.28. /proc/uptime | E.2.28. /proc/uptime This file contains information detailing how long the system has been on since its last restart. The output of /proc/uptime is quite minimal: The first value represents the total number of seconds the system has been up. The second value is the sum of how much time each core has spent idle, in seconds. Consequently, the second value may be greater than the overall system uptime on systems with multiple cores. | [
"350735.47 234388.90"
]
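For example, dividing the sample values above by 86400 (the number of seconds in a day) shows that the system has been up for about 350735.47 / 86400 ≈ 4.1 days, with roughly 234388.90 / 86400 ≈ 2.7 core-days spent idle.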
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-uptime |
Chapter 8. Networking | Chapter 8. Networking 8.1. Networking overview OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with OpenShift Container Platform networking and its ecosystem. Note You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible. Figure 8.1. OpenShift Virtualization networking overview Pods and VMs run on the same network infrastructure which allows you to easily connect your containerized and virtualized workloads. You can connect VMs to the default pod network and to any number of secondary networks. The default pod network provides connectivity between all its members, service abstraction, IP management, micro segmentation, and other functionality. Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins. The default pod network is overlay-based, tunneled through the underlying machine network. The machine network can be defined over a selected set of network interface controllers (NICs). Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks. Note Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS. Secondary VM networks can be defined on dedicated set of NICs, as shown in Figure 1, or they can use the machine network. 8.1.1. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 8.1.2. Using the default pod network Connecting a virtual machine to the default pod network Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification. Exposing a virtual machine as a service You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OpenShift Container Platform web console or the CLI. 8.1.3. Configuring VM secondary network interfaces You can connect a virtual machine to a secondary network by using Linux bridge, SR-IOV and OVN-Kubernetes CNI plugins. 
You can list multiple secondary networks and interfaces in the VM specification. It is not required to specify the primary pod network in the VM specification when connecting to a secondary network interface. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a VM to an OVN-Kubernetes secondary network. OpenShift Virtualization supports the layer2 and localnet topologies for OVN-Kubernetes. The localnet topology is the recommended way of exposing VMs to the underlying physical network, with or without VLAN encapsulation. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD). Note For localnet topology, you must configure an OVS bridge by creating a NodeNetworkConfigurationPolicy object before creating the NAD. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. Connecting a virtual machine to an SR-IOV network You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments. You can connect a VM to an SR-IOV network by performing the following steps: Configure an SR-IOV network device by creating a SriovNetworkNodePolicy CRD. Configure an SR-IOV network by creating an SriovNetwork object. Connect the VM to the SR-IOV network by including the network details in the VM configuration. Connecting a virtual machine to a Linux bridge network Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bonding for your secondary networks. The OVN-Kubernetes localnet topology is the recommended way of connecting a VM to the underlying physical network, but OpenShift Virtualization also supports Linux bridge networks. Note You cannot directly attach to the default machine network when using Linux bridge networks. You can create a Linux bridge network and attach a VM to the network by performing the following steps: Configure a Linux bridge network device by creating a NodeNetworkConfigurationPolicy custom resource definition (CRD). Configure a Linux bridge network by creating a NetworkAttachmentDefinition CRD. Connect the VM to the Linux bridge network by including the network details in the VM configuration. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. 
OpenShift Virtualization also supports hot plugging secondary interfaces that use the SR-IOV binding. Using DPDK with SR-IOV The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks. Configuring a dedicated network for live migration You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. Accessing a virtual machine by using the cluster FQDN You can access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). Configuring and viewing IP addresses You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 8.1.3.1. Comparing Linux bridge CNI and OVN-Kubernetes localnet topology The following table provides a comparison of features available when using the Linux bridge CNI compared to the localnet topology for an OVN-Kubernetes plugin: Table 8.1. Linux bridge CNI compared to an OVN-Kubernetes localnet topology Feature Available on Linux bridge CNI Available on OVN-Kubernetes localnet Layer 2 access to the underlay native network Only on secondary network interface controllers (NICs) Yes Layer 2 access to underlay VLANs Yes Yes Network policies No Yes Managed IP pools No Yes MAC spoof filtering Yes Yes 8.1.4. Integrating with OpenShift Service Mesh Connecting a virtual machine to a service mesh OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines. 8.1.5. Managing MAC address pools Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. 8.1.6. Configuring SSH access Configuring SSH access to virtual machines You can configure SSH access to VMs by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-foward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address. 8.2. Connecting a virtual machine to the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode. Note Traffic passing through network interfaces to the default pod network is interrupted during live migration. 8.2.1. 
Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. Procedure Edit the interfaces spec of your virtual machine configuration file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 # ... networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 8.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 # ... networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . 
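Before applying the manifest, you can confirm that the cluster network is configured for dual-stack (a quick check, assuming cluster-admin privileges; the command is illustrative): USD oc get network.config/cluster -o jsonpath='{.status.clusterNetwork}' The output should list both an IPv4 and an IPv6 CIDR.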
Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 8.2.3. About jumbo frames support When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes. The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways: libvirt : If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device. DHCP: If the guest DHCP client can read the MTU value from the DHCP server response. Note For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value. 8.2.4. Additional resources Changing the MTU for the cluster network Optimizing the MTU for your network 8.3. Exposing a virtual machine by using a service You can expose a virtual machine within the cluster or outside the cluster by creating a Service object. 8.3.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. Additional resources Installing the MetalLB Operator Configuring services to use MetalLB 8.3.2. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. 
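Once you have created such a service, you can inspect which IP families and cluster IP addresses were actually assigned (the service name here is only an example): USD oc get service example-service -o jsonpath='{.spec.ipFamilies} {.spec.clusterIPs}'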
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 8.3.3. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 8.3.4. Additional resources Configuring ingress cluster traffic using a NodePort Configuring ingress cluster traffic using a load balancer 8.4. Accessing a virtual machine by using its internal FQDN You can access a virtual machine (VM) that is connected to the default internal pod network on a stable fully qualified domain name (FQDN) by using headless services. A Kubernetes headless service is a form of service that does not allocate a cluster IP address to represent a set of pods. Instead of providing a single virtual IP address for the service, a headless service creates a DNS record for each pod associated with the service. You can expose a VM through its FQDN without having to expose a specific TCP or UDP port. Important If you created a VM by using the OpenShift Container Platform web console, you can find its internal FQDN listed in the Network tile on the Overview tab of the VirtualMachine details page. For more information about connecting to the VM, see Connecting to a virtual machine by using its internal FQDN . 8.4.1. Creating a headless service in a project by using the CLI To create a headless service in a namespace, add the clusterIP: None parameter to the service YAML definition. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create a Service manifest to expose the VM, such as the following example: apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234 1 The name of the service. This must match the spec.subdomain attribute in the VirtualMachine manifest file. 2 This service selector must match the expose:me label in the VirtualMachine manifest file. 3 Specifies a headless service. 
4 The list of ports that are exposed by the service. You must define at least one port. This can be any arbitrary value as it does not affect the headless service. Save the Service manifest file. Create the service by running the following command: USD oc create -f headless_service.yaml 8.4.2. Mapping a virtual machine to a headless service by using the CLI To connect to a virtual machine (VM) from within the cluster by using its internal fully qualified domain name (FQDN), you must first map the VM to a headless service. Set the spec.hostname and spec.subdomain parameters in the VM configuration file. If a headless service exists with a name that matches the subdomain, a unique DNS A record is created for the VM in the form of <vm.spec.hostname>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local . Procedure Edit the VirtualMachine manifest to add the service selector label and subdomain by running the following command: USD oc edit vm <vm_name> Example VirtualMachine manifest file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: "myvm" 2 subdomain: "mysubdomain" 3 # ... 1 The expose:me label must match the spec.selector attribute of the Service manifest that you previously created. 2 If this attribute is not specified, the resulting DNS A record takes the form of <vm.metadata.name>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local . 3 The spec.subdomain attribute must match the metadata.name value of the Service object. Save your changes and exit the editor. Restart the VM to apply the changes. 8.4.3. Connecting to a virtual machine by using its internal FQDN You can connect to a virtual machine (VM) by using its internal fully qualified domain name (FQDN). Prerequisites You have installed the virtctl tool. You have identified the internal FQDN of the VM from the web console or by mapping the VM to a headless service. The internal FQDN has the format <vm.spec.hostname>.<vm.spec.subdomain>.<vm.metadata.namespace>.svc.cluster.local . Procedure Connect to the VM console by entering the following command: USD virtctl console vm-fedora To connect to the VM by using the requested FQDN, run the following command: USD ping myvm.mysubdomain.<namespace>.svc.cluster.local Example output PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms In the preceding example, the DNS entry for myvm.mysubdomain.default.svc.cluster.local points to 10.244.0.57 , which is the cluster IP address that is currently assigned to the VM. 8.4.4. Additional resources Exposing a VM by using a service 8.5. Connecting a virtual machine to a Linux bridge network By default, OpenShift Virtualization is installed with a single, internal pod network. You can create a Linux bridge network and attach a virtual machine (VM) to the network by performing the following steps: Create a Linux bridge node network configuration policy (NNCP) . Create a Linux bridge network attachment definition (NAD) by using the web console or the command line . Configure the VM to recognize the NAD by using the web console or the command line . Note OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . 8.5.1. 
Creating a Linux bridge NNCP You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 8.5.2. Creating a Linux bridge NAD You can create a Linux bridge network attachment definition (NAD) by using the OpenShift Container Platform web console or command line. 8.5.2.1. Creating a Linux bridge NAD by using the web console You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console. A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Procedure In the web console, click Networking NetworkAttachmentDefinitions . Click Create Network Attachment Definition . Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Select CNV Linux bridge from the Network Type list. Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . 8.5.2.2. Creating a Linux bridge NAD by using the command line You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines (VMs) by using the command line. The NAD and the VM must be in the same namespace. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Prerequisites The node must support nftables and the nft binary must be deployed to enable MAC spoof check. Procedure Add the VM to the NetworkAttachmentDefinition configuration, as in the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { "cniVersion": "0.3.1", "name": "bridge-network", 3 "type": "bridge", 4 "bridge": "br1", 5 "macspoofchk": false, 6 "vlan": 100, 7 "disableContainerInterface": true, "preserveDefaultVlan": false 8 } 1 The name for the NetworkAttachmentDefinition object. 2 Optional: Annotation key-value pair for node selection for the bridge configured on some nodes. 
If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the defined bridge connected. 3 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 4 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI. 5 The name of the Linux bridge configured on the node. The name should match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest. 6 Optional: A flag to enable the MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 7 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 8 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Create the network attachment definition: USD oc create -f network-attachment-definition.yaml 1 1 Where network-attachment-definition.yaml is the file name of the network attachment definition manifest. Verification Verify that the network attachment definition was created by running the following command: USD oc get network-attachment-definition bridge-network 8.5.3. Configuring a VM network interface You can configure a virtual machine (VM) network interface by using the OpenShift Container Platform web console or command line. 8.5.3.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 8.5.3.2. Configuring a VM network interface by using the command line You can configure a virtual machine (VM) network interface for a bridge network by using the command line. Prerequisites Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect. 
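For example, assuming that the virtctl tool is installed, you can shut down the VM before editing its configuration; the VM name and namespace are placeholders: USD virtctl stop <vm_name> -n <namespace>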
Procedure Add the bridge interface and the network attachment definition to the VM configuration as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 # ... networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3 1 The name of the bridge interface. 2 The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry. 3 The name of the network attachment definition. Apply the configuration: USD oc apply -f example-vm.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 8.6. Connecting a virtual machine to an SR-IOV network You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps: Configuring an SR-IOV network device Configuring an SR-IOV network Connecting the VM to the SR-IOV network 8.6.1. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. Reboot only happens in the following cases: With Mellanox NICs ( mlx5 driver), a node reboot happens every time the number of virtual functions (VFs) increases on a physical function (PF). With Intel NICs, a reboot only happens if the kernel parameters do not include intel_iommu=on and iommu=pt . It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 .
6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3 . 10 Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b , 1015 , and 1017 . 11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false . The default value is false . Note If the isRdma flag is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in the sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' 8.6.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network.
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_tx_rate> 9 vlanQoS: <vlan_qos> 10 trust: "<trust_vf>" 11 capabilities: <capabilities> 12 1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. 5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 7 Optional: Replace <link_state> with the link state of the virtual function (VF). Allowed values are enable , disable , and auto . 8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF. 9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0 . 11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 12 Optional: Replace <capabilities> with the capabilities to configure for this network. To create the object, enter the following command. Replace <name> with a name for this additional network. USD oc create -f <name>-sriov-network.yaml Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 8.6.3. Connecting a virtual machine to an SR-IOV network by using the command line You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. Procedure Add the SR-IOV network details to the spec.domain.devices.interfaces and spec.networks stanzas of the VM configuration as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3 # ... 1 Specify a unique name for the SR-IOV interface. 2 Specify the name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier. 3 Specify the name of the SR-IOV network attachment definition.
Apply the virtual machine configuration: USD oc apply -f <vm_sriov>.yaml 1 1 The name of the virtual machine YAML file. 8.6.4. Connecting a VM to an SR-IOV network by using the web console You can connect a VM to the SR-IOV network by including the network details in the VM configuration. Prerequisites You must create a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name. Select an SR-IOV network attachment definition from the Network list. Select SR-IOV from the Type list. Optional: Add a network Model or Mac address . Click Save . Restart or live-migrate the VM to apply the changes. 8.6.5. Additional resources Configuring DPDK workloads for improved performance 8.7. Using DPDK with SR-IOV The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and virtual machines (VMs) to run DPDK workloads over SR-IOV networks. 8.7.1. Configuring a cluster for DPDK workloads You can configure an OpenShift Container Platform cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You have installed the Node Tuning Operator. Procedure Map the topology of your compute nodes to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS). If your OpenShift Container Platform cluster uses separate control plane and compute nodes for high availability: Label a subset of the compute nodes with a custom role; for example, worker-dpdk : USD oc label node <node_name> node-role.kubernetes.io/worker-dpdk="" Create a new MachineConfigPool manifest that contains the worker-dpdk label in the spec.machineConfigSelector object: Example MachineConfigPool manifest apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: "" Create a PerformanceProfile manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for housekeeping. Example PerformanceProfile manifest apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: "" numa: topologyPolicy: single-numa-node Note The compute nodes automatically restart after you apply the MachineConfigPool and PerformanceProfile manifests.
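Optional: You can watch the machine config pool while the nodes restart. This verification step is a sketch that assumes the worker-dpdk pool name from the previous example; the rollout is complete when the UPDATED column reports True : USD oc get mcp worker-dpdk -w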
Retrieve the name of the generated RuntimeClass resource from the status.runtimeClass field of the PerformanceProfile object: USD oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}' Set the previously obtained RuntimeClass name as the default container runtime class for the virt-launcher pods by editing the HyperConverged custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]' Note Editing the HyperConverged CR changes a global setting that affects all VMs that are created after the change is applied. If your DPDK-enabled compute nodes use Simultaneous multithreading (SMT), enable the AlignCPUs enabler by editing the HyperConverged CR: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type='json' -p='[{"op": "replace", "path": "/spec/featureGates/alignCPUs", "value": true}]' Note Enabling AlignCPUs allows OpenShift Virtualization to request up to two additional dedicated CPUs to bring the total CPU count to an even parity when using emulator thread isolation. Create an SriovNetworkNodePolicy object with the spec.deviceType field set to vfio-pci : Example SriovNetworkNodePolicy manifest apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: "8086" deviceID: "1572" pfNames: - eno3 rootDevices: - "0000:19:00.2" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" Additional resources Using CPU Manager and Topology Manager Configuring huge pages Creating a custom machine config pool 8.7.1.1. Removing a custom machine config pool for high-availability clusters You can delete a custom machine config pool that you previously created for your high-availability cluster. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created a custom machine config pool by labeling a subset of the compute nodes with a custom role and creating a MachineConfigPool manifest with that label. Procedure Remove the worker-dpdk label from the compute nodes by running the following command: USD oc label node <node_name> node-role.kubernetes.io/worker-dpdk- Delete the MachineConfigPool manifest that contains the worker-dpdk label by entering the following command: USD oc delete mcp worker-dpdk 8.7.2. Configuring a project for DPDK workloads You can configure the project to run DPDK workloads on SR-IOV hardware. Prerequisites Your cluster is configured to run DPDK workloads. Procedure Create a namespace for your DPDK applications: USD oc create ns dpdk-checkup-ns Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. 
Example SriovNetwork manifest apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: "off" trust: "on" vlan: 1019 1 The namespace where the NetworkAttachmentDefinition object is deployed. 2 The value of the spec.resourceName attribute of the SriovNetworkNodePolicy object that was created when configuring the cluster for DPDK workloads. Optional: Run the virtual machine latency checkup to verify that the network is properly configured. Optional: Run the DPDK checkup to verify that the namespace is ready for DPDK workloads. Additional resources Working with projects Virtual machine latency checkup DPDK checkup 8.7.3. Configuring a virtual machine for DPDK workloads You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing. Prerequisites Your cluster is configured to run DPDK workloads. You have created and configured the project in which the VM will run. Procedure Edit the VirtualMachine manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages: Example VirtualMachine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east # ... 1 This annotation specifies that load balancing is disabled for CPUs that are used by the container. 2 This annotation specifies that the CPU quota is disabled for CPUs that are used by the container. 3 This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container. 4 The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node. 5 The number of cores inside the VM. This must be a value greater than or equal to 1 . In this example, the VM is scheduled with 5 hyper-threads or 10 CPUs. 6 The size of the huge pages. The possible values for x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi. 7 The name of the SR-IOV NetworkAttachmentDefinition object. Save and exit the editor. Apply the VirtualMachine manifest: USD oc apply -f <file_name>.yaml Configure the guest operating system. The following example shows the configuration steps for the RHEL 9 operating system: Configure huge pages by using the GRUB bootloader command-line interface. In the following example, 8 1G huge pages are specified.
USD grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8" To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands: USD dnf install -y tuned-profiles-cpu-partitioning USD echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf The first two CPUs (0 and 1) are set aside for housekeeping tasks and the rest are isolated for the DPDK application. USD tuned-adm profile cpu-partitioning Override the SR-IOV NIC driver by using the driverctl device driver control utility: USD dnf install -y driverctl USD driverctl set-override 0000:07:00.0 vfio-pci Restart the VM to apply the changes. 8.8. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a virtual machine (VM) to an OVN-Kubernetes secondary network. OpenShift Virtualization supports the layer2 and localnet topologies for OVN-Kubernetes. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes. Note An OVN-Kubernetes secondary network is compatible with the multi-network policy API, which provides the MultiNetworkPolicy custom resource definition (CRD) to control traffic flow to and from VMs. You can use the ipBlock attribute to define network policy ingress and egress rules for specific CIDR blocks. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD). Note For localnet topology, you must configure an OVS bridge by creating a NodeNetworkConfigurationPolicy object before creating the NAD. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. 8.8.1. Creating an OVN-Kubernetes NAD You can create an OVN-Kubernetes network attachment definition (NAD) by using the OpenShift Container Platform web console or the CLI. Note Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported. 8.8.1.1. Creating a NAD for layer 2 topology using the CLI You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Create a NetworkAttachmentDefinition object: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { "cniVersion": "0.3.1", 1 "name": "my-namespace-l2-network", 2 "type": "ovn-k8s-cni-overlay", 3 "topology":"layer2", 4 "mtu": 1300, 5 "netAttachDefName": "my-namespace/l2-network" 6 } 1 The CNI specification version. The required value is 0.3.1 . 2 The name of the network. This attribute is not namespaced.
For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces. 3 The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay . 4 The topological configuration for the network. The required value is layer2 . 5 Optional: The maximum transmission unit (MTU) value. The default value is automatically set by the kernel. 6 The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object. Note The above example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address. Apply the manifest: USD oc apply -f <filename>.yaml 8.8.1.2. Creating a NAD for localnet topology using the CLI You can create a network attachment definition (NAD) which describes how to attach a pod to the underlying physical network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). You have installed the Kubernetes NMState Operator. Procedure Create a NodeNetworkConfigurationPolicy object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5 1 The name of the configuration object. 2 Specifies the nodes to which the node network configuration policy is to be applied. The recommended node selector value is node-role.kubernetes.io/worker: '' . 3 The name of the additional network from which traffic is forwarded to the OVS bridge. This attribute must match the value of the spec.config.name field of the NetworkAttachmentDefinition object that defines the OVN-Kubernetes additional network. 4 The name of the OVS bridge on the node. This value is required if the state attribute is present . 5 The state of the mapping. Must be either present to add the mapping or absent to remove the mapping. The default value is present . Note OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Create a NetworkAttachmentDefinition object: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |- { "cniVersion": "0.3.1", 1 "name": "localnet-network", 2 "type": "ovn-k8s-cni-overlay", 3 "topology": "localnet", 4 "netAttachDefName": "default/localnet-network" 5 } 1 The CNI specification version. The required value is 0.3.1 . 2 The name of the network. This attribute must match the value of the spec.desiredState.ovn.bridge-mappings.localnet field of the NodeNetworkConfigurationPolicy object that defines the OVS bridge mapping. 3 The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay . 4 The topological configuration for the network. The required value is localnet . 
5 The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object. Apply the manifest: USD oc apply -f <filename>.yaml 8.8.1.3. Creating a NAD for layer 2 topology by using the web console You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. Procedure Go to Networking NetworkAttachmentDefinitions in the web console. Click Create Network Attachment Definition . The network attachment definition must be in the same namespace as the pod or virtual machine using it. Enter a unique Name and optional Description . Select OVN Kubernetes L2 overlay network from the Network Type list. Click Create . 8.8.1.4. Creating a NAD for localnet topology using the web console You can create a network attachment definition (NAD) to connect workloads to a physical network by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with cluster-admin privileges. Use nmstate to configure the localnet to OVS bridge mappings. Procedure Navigate to Networking NetworkAttachmentDefinitions in the web console. Click Create Network Attachment Definition . The network attachment definition must be in the same namespace as the pod or virtual machine using it. Enter a unique Name and optional Description . Select OVN Kubernetes secondary localnet network from the Network Type list. Enter the name of your pre-configured localnet identifier in the Bridge mapping field. Optional: You can explicitly set MTU to the specified value. The default value is chosen by the kernel. Optional: Encapsulate the traffic in a VLAN. The default value is none. Click Create . 8.8.2. Attaching a virtual machine to the OVN-Kubernetes secondary network You can attach a virtual machine (VM) to the OVN-Kubernetes secondary network interface by using the OpenShift Container Platform web console or the CLI. 8.8.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4 # ... 1 The name of the OVN-Kubernetes secondary interface. 2 The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field. 3 The name of the NetworkAttachmentDefinition object. 4 Specifies the nodes on which the VM can be scheduled. The recommended node selector value is node-role.kubernetes.io/worker: '' . Apply the VirtualMachine manifest: USD oc apply -f <filename>.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 8.8.3. Additional resources Creating secondary networks on OVN-Kubernetes About the Kubernetes NMState Operator Creating primary networks using a NetworkAttachmentDefinition 8.9. 
Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. OpenShift Virtualization also supports hot plugging secondary interfaces that use SR-IOV binding. Note Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces. 8.9.1. VirtIO limitations Each VirtIO interface uses one of the limited Peripheral Component Interconnect (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance, so slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces. Note The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation . If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces. 8.9.2. Hot plugging a secondary network interface by using the CLI Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. Prerequisites A network attachment definition is configured in the same namespace as your VM. You have installed the virtctl tool. You have installed the OpenShift CLI ( oc ). Procedure If the VM to which you want to hot plug the network interface is not running, start it by using the following command: USD virtctl start <vm_name> -n <namespace> Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. USD oc edit vm <vm_name> Example VM configuration apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3 # ... 1 Specifies the name of the new network interface. 2 Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the spec.template.spec.domain.devices.interfaces list. 3 Specifies the name of the NetworkAttachmentDefinition object.
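Optional: Before you migrate the VM, you can confirm that the new interface is present in the VM specification. This check is a sketch; the JSONPath expression assumes the spec.template.spec layout of the example above: USD oc get vm <vm_name> -o jsonpath='{.spec.template.spec.domain.devices.interfaces[*].name}'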
To attach the network interface to the running VM, live migrate the VM by running the following command: USD virtctl migrate <vm_name> Verification Verify that the VM live migration is successful by using the following command: USD oc get VirtualMachineInstanceMigration -w Example output NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora Verify that the new interface is added to the VM by checking the VMI status: USD oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }" Example output [ { "infoSource": "domain, guest-agent", "interfaceName": "eth0", "ipAddress": "10.130.0.195", "ipAddresses": [ "10.130.0.195", "fd02:0:0:3::43c" ], "mac": "52:54:00:0e:ab:25", "name": "default", "queueCount": 1 }, { "infoSource": "domain, guest-agent, multus-status", "interfaceName": "eth1", "mac": "02:d8:b8:00:00:2a", "name": "bridge-interface", 1 "queueCount": 1 } ] 1 The hot plugged interface appears in the VMI status. 8.9.3. Hot unplugging a secondary network interface by using the CLI You can remove a secondary network interface from a running virtual machine (VM). Note Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces. Prerequisites Your VM must be running. The VM must be created on a cluster running OpenShift Virtualization 4.14 or later. The VM must have a bridge network interface attached. Procedure Edit the VM specification to hot unplug a secondary network interface. Setting the interface state to absent detaches the network interface from the guest, but the interface still exists in the pod. USD oc edit vm <vm_name> Example VM configuration apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name> # ... 1 Set the interface state to absent to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface. Remove the interface from the pod by migrating the VM: USD virtctl migrate <vm_name> 8.9.4. Additional resources Installing virtctl Creating a Linux bridge network attachment definition Connecting a virtual machine to a Linux bridge network Creating an SR-IOV network attachment definition Connecting a virtual machine to an SR-IOV network 8.10. Connecting a virtual machine to a service mesh OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. 8.10.1. Adding a virtual machine to a service mesh To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true . Then expose your VM as a service to view your application in the mesh. Important To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090. Prerequisites You installed the Service Mesh Operators. You created the Service Mesh control plane.
You added the VM project to the Service Mesh member roll. Procedure Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation: Example configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: "true" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk 1 The key/value pair (label) that must be matched to the service selector attribute. 2 The annotation to enable automatic sidecar injection. 3 The binding method (masquerade mode) for use with the default pod network. Apply the VM configuration: USD oc apply -f <vm_name>.yaml 1 1 The name of the virtual machine YAML file. Create a Service object to expose your VM to the service mesh. apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP 1 The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.template.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio . Create the service: USD oc create -f <service_name>.yaml 1 1 The name of the service YAML file. 8.10.2. Additional resources Installing the Service Mesh Operators Creating the Service Mesh control plane Adding projects to the Service Mesh member roll 8.11. Configuring a dedicated network for live migration You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 8.11.1. Configuring a dedicated secondary network for live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. Each node has at least two Network Interface Cards (NICs). The NICs for live migration are connected to the same VLAN. Procedure Create a NetworkAttachmentDefinition manifest according to the following example: Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 2 "mode": "bridge", "ipam": { "type": "whereabouts", 3 "range": "10.200.5.0/24" 4 } }' 1 Specify the name of the NetworkAttachmentDefinition object. 2 Specify the name of the NIC to be used for live migration. 3 Specify the name of the IPAM CNI plugin that assigns IP addresses for the NAD. 4 Specify an IP address range for the secondary network. This range must not overlap with the IP addresses of the main network.
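Apply the NetworkAttachmentDefinition manifest. This step is a minimal sketch; the file name is an assumption based on the example above: USD oc apply -f my-secondary-network.yaml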
Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR: Example HyperConverged manifest apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. USD oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 8.11.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the OpenShift Container Platform web console. Prerequisites You configured a Multus network for live migration. You created a network attachment definition for the network. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 8.11.3. Additional resources Configuring live migration limits and timeouts 8.12. Configuring and viewing IP addresses You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 8.12.1. Configuring IP addresses for virtual machines You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line. You can configure a dynamic IP address when you create a VM by using the command line. The IP address is provisioned with cloud-init. 8.12.1.1. Configuring an IP address when creating a virtual machine by using the command line You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. Note If the VM is connected to the pod network, the pod network interface is the default route unless you update it. Prerequisites The virtual machine is connected to a secondary network. You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine. Procedure Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration: To configure a dynamic IP address, specify the interface name and enable DHCP: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 1 Specify the interface name. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: # ... template: # ... 
spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 Specify the interface name. 2 Specify the static IP address. 8.12.2. Viewing IP addresses of virtual machines You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 8.12.2.1. Viewing the IP address of a virtual machine by using the web console You can view the IP address of a virtual machine (VM) by using the OpenShift Container Platform web console. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a VM to open the VirtualMachine details page. Click the Details tab to view the IP address. 8.12.2.2. Viewing the IP address of a virtual machine by using the command line You can view the IP address of a virtual machine (VM) by using the command line. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure Obtain the virtual machine instance configuration by running the following command: USD oc describe vmi <vmi_name> Example output # ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 8.12.3. Additional resources Installing the QEMU guest agent 8.13. Accessing a virtual machine by using its external FQDN You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). Important Accessing a VM from outside the cluster by using its FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.13.1. Configuring a DNS server for secondary networks The Cluster Network Addons Operator (CNAO) deploys a Domain Name Server (DNS) server and monitoring components when you enable the deployKubeSecondaryDNS feature gate in the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You configured a load balancer for the cluster. You logged in to the cluster with cluster-admin permissions. 
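Note: To confirm that your account can update the HyperConverged CR before you begin, you can run a quick access check. This check is a sketch; the plural resource name hyperconvergeds.hco.kubevirt.io is an assumption about the HCO CRD: USD oc auth can-i update hyperconvergeds.hco.kubevirt.io -n openshift-cnv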
Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Enable the DNS server and monitoring components according to the following example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1 # ... 1 Enables the DNS server. Save the file and exit the editor. Create a load balancer service to expose the DNS server outside the cluster by running the oc expose command according to the following example: USD oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb \ --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP' Retrieve the external IP address by running the following command: USD oc get service -n openshift-cnv Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s Edit the HyperConverged CR again: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the external IP address that you previously retrieved to the kubeSecondaryDNSNameServerIP field of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: "10.46.41.94" 1 # ... 1 Specify the external IP address exposed by the load balancer service. Save the file and exit the editor. Retrieve the cluster FQDN by running the following command: USD oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}' Example output openshift.example.com Point to the DNS server. To do so, add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example: vm.<FQDN>. IN NS ns.vm.<FQDN>. ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP> 8.13.2. Connecting to a VM on a secondary network by using the cluster FQDN You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster. Prerequisites You installed the QEMU guest agent on the VM. The IP address of the VM is public. You configured the DNS server for secondary networks. You retrieved the fully qualified domain name (FQDN) of the cluster. To obtain the FQDN, use the oc get command as follows: USD oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain Procedure Retrieve the network interface name from the VM configuration by running the following command: USD oc get vm -n <namespace> <vm_name> -o yaml Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic # ... networks: - multus: networkName: bridge-conf name: example-nic 1 1 Note the name of the network interface. Connect to the VM by using the ssh command: USD ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn> 8.13.3. Additional resources Configuring ingress cluster traffic using a load balancer About MetalLB and the MetalLB Operator Configuring IP addresses for virtual machines 8.14. Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool.
This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. 8.14.1. Managing KubeMacPool by using the command line You can disable and re-enable KubeMacPool by using the command line. KubeMacPool is enabled by default. Procedure To disable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore To re-enable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io- | [
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}",
"oc create -f <vm-name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4",
"oc create -f example-vm-ipv6.yaml",
"oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1",
"apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000",
"oc create -f example-service.yaml",
"oc get service -n example-namespace",
"apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234",
"oc create -f headless_service.yaml",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: \"myvm\" 2 subdomain: \"mysubdomain\" 3",
"virtctl console vm-fedora",
"ping myvm.mysubdomain.<namespace>.svc.cluster.local",
"PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"bridge-network\", 3 \"type\": \"bridge\", 4 \"bridge\": \"br1\", 5 \"macspoofchk\": false, 6 \"vlan\": 100, 7 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 8 }",
"oc create -f network-attachment-definition.yaml 1",
"oc get network-attachment-definition bridge-network",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3",
"oc apply -f example-vm.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12",
"oc create -f <name>-sriov-network.yaml",
"oc get net-attach-def -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3",
"oc apply -f <vm_sriov>.yaml 1",
"oc label node <node_name> node-role.kubernetes.io/worker-dpdk=\"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: \"\" numa: topologyPolicy: single-numa-node",
"oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{\"\\n\"}'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/defaultRuntimeClass\", \"value\":\"<runtimeclass-name>\"}]'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/featureGates/alignCPUs\", \"value\": true}]'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: \"8086\" deviceID: \"1572\" pfNames: - eno3 rootDevices: - \"0000:19:00.2\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"",
"oc label node <node_name> node-role.kubernetes.io/worker-dpdk-",
"oc delete mcp worker-dpdk",
"oc create ns dpdk-checkup-ns",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: \"off\" trust: \"on\" vlan: 1019",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east",
"oc apply -f <file_name>.yaml",
"grubby --update-kernel=ALL --args=\"default_hugepagesz=1GB hugepagesz=1G hugepages=8\"",
"dnf install -y tuned-profiles-cpu-partitioning",
"echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf",
"tuned-adm profile cpu-partitioning",
"dnf install -y driverctl",
"driverctl set-override 0000:07:00.0 vfio-pci",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }",
"oc apply -f <filename>.yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"localnet-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\": \"localnet\", 4 \"netAttachDefName\": \"default/localnet-network\" 5 }",
"oc apply -f <filename>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4",
"oc apply -f <filename>.yaml",
"virtctl start <vm_name> -n <namespace>",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3",
"virtctl migrate <vm_name>",
"oc get VirtualMachineInstanceMigration -w",
"NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora",
"oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"",
"[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>",
"virtctl migrate <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk",
"oc apply -f <vm_name>.yaml 1",
"apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP",
"oc create -f <service_name>.yaml 1",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'",
"kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true",
"kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2",
"oc describe vmi <vmi_name>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1",
"oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'",
"oc get service -n openshift-cnv",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: \"10.46.41.94\" 1",
"oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'",
"openshift.example.com",
"vm.<FQDN>. IN NS ns.vm.<FQDN>.",
"ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP>",
"oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain",
"oc get vm -n <namespace> <vm_name> -o yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic networks: - multus: networkName: bridge-conf name: example-nic 1",
"ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/virtualization/networking |
Chapter 9. Monitoring Hosts Using Red Hat Insights In this chapter, you can find information about monitoring your hosts using Red Hat Insights, creating host monitoring reports, and creating an Insights plan. 9.1. Using Red Hat Insights with Hosts in Satellite You can use Red Hat Insights to diagnose systems and downtime related to security exploits, performance degradation, and stability failures. You can use the dashboard to quickly identify key risks to stability, security, and performance. You can sort by category, view details of the impact and resolution, and then determine what systems are affected. To use Red Hat Insights to monitor hosts that you manage with Satellite, you must first install Red Hat Insights on your hosts and register your hosts with Red Hat Insights. For new Satellite hosts, you can install and configure Insights during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template in the Managing Hosts guide. To install and register your host using Puppet, or manually, see Red Hat Insights Getting Started . Red Hat Insights Information Available for Hosts Additional information is available about hosts through Red Hat Insights. You can find this information in two places: In the Satellite web UI, navigate to Configure > Insights where the vertical ellipsis next to the Remediate button provides a View in Red Hat Insights link to the general recommendations page. On each recommendation line, the vertical ellipsis provides a View in Red Hat Insights link to the recommendation rule, and, if one is available for that recommendation, a Knowledgebase article link. For additional information, navigate to Hosts > All hosts . If the host has recommendations listed, click on the number of recommendations. On the Insights tab, the vertical ellipsis next to the Remediate button provides a Go To Satellite Insights page link to information for the system, and a View in Red Hat Insights link to host details on the console. Excluding hosts from rh-cloud and insights-client reports You can set the host_registration_insights parameter to False to omit rh-cloud and insights-client reports. Satellite will exclude the hosts from rh-cloud reports and block insights-client from uploading a report to the cloud. You can also set this parameter at the organization, hostgroup, subnet, and domain level. It automatically prevents new reports from being uploaded as long as they are associated with the entity. If you set the parameter to false on a host that is already reported on the Red Hat Hybrid Cloud , it will still be removed automatically from the inventory. However, this process can take some time to complete. Deploying Red Hat Insights using the Ansible Role You can automate the installation and registration of hosts with Red Hat Insights using the RedHatInsights.insights-client Ansible role. For more information about adding this role to your Satellite, see Getting Started with Ansible in Satellite in Configuring Satellite to use Ansible . Add the RedHatInsights.insights-client role to the hosts. For new hosts, see Section 2.1, "Creating a Host in Red Hat Satellite" . For existing hosts, see Using Ansible Roles to Automate Repetitive Tasks on Satellite Hosts in Configuring Satellite to use Ansible . 
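If you want to verify the role outside of Satellite, a minimal standalone playbook that applies the same role might look like the following. This is a sketch only: the hosts value and the inventory it targets are assumptions, and within Satellite the role is normally assigned and run through the web UI as described below.
---
# Apply the Insights client role to every host in the inventory
- hosts: all
  roles:
    - RedHatInsights.insights-client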
To run the RedHatInsights.insights-client role on your host, navigate to Hosts > All Hosts and click the name of the host that you want to use. Click the Run Ansible roles button. You must set up the API token for Insights before continuing. For further information, see Red Hat API Tokens . You can manually synchronize the recommendations using the following procedure: In the Satellite web UI, navigate to Configure > Insights . Click the Start Recommendations Sync button. If you have not set up the API token, you are prompted to create one before using this page. Additional Information To view the logs for Red Hat Insights and all plug-ins, go to /var/log/foreman/production.log . If you have problems connecting to Red Hat Insights, ensure that your certificates are up-to-date. Refresh your subscription manifest to update your certificates. You can change the default schedule for running insights-client by configuring insights-client.timer on a host. For more information, see Changing the insights-client schedule in the Client Configuration Guide for Red Hat Insights . 9.2. Creating an Insights Plan for Hosts With Satellite, you can create a Red Hat Insights remediation plan and run the plan on Satellite hosts. Procedure In the Satellite web UI, navigate to Configure > Insights . On the Red Hat Insights page, select the number of recommendations that you want to include in an Insights plan. You can only select the recommendations that have an associated playbook. Click Remediate . In the Remediation Summary window, you can select the Resolutions to apply. Use the Filter field to search for specific keywords. Click Remediate . In the Job Invocation page, do not change the contents of precompleted fields. Optional. For more advanced configuration of the Remote Execution Job, click Show Advanced Fields . Select the Type of query you require. Select the Schedule you require. Click Submit . Alternatively: In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Host details page, click Recommendations . On the Red Hat Insights page, select the number of recommendations you want to include in an Insights plan and proceed as before. In the Jobs window, you can view the progress of your plan. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/Monitoring_Hosts_Using_Red_Hat_Insights_managing-hosts |
Chapter 1. Sample projects and business assets in Business Central | Chapter 1. Sample projects and business assets in Business Central Business Central contains sample projects with business assets that you can use as a reference for the rules, processes, or other assets that you create in your own Red Hat Process Automation Manager projects. Each sample project is designed differently to demonstrate process automation, decision management, or business optimization assets and logic in Red Hat Process Automation Manager. Note Red Hat does not provide support for the sample code included in the Red Hat Process Automation Manager distribution. The following sample projects are available in Business Central: Course_Scheduling : (Business optimization) Course scheduling and curriculum decision process. Assigns lectures to rooms and determines a student's curriculum based on factors such as course conflicts and class room capacity. Dinner_Party : (Business optimization) Guest seating optimization using guided decision tables. Assigns guest seating based on each guest's job type, political beliefs, and known relationships. Employee_Rostering : (Business optimization) Employee rostering optimization using decision and solver assets. Assigns employees to shifts based on skills. Evaluation_Process : (Process automation) Evaluation process using business process assets. Evaluates employees based on performance. IT_Orders : (Process automation and case management) Ordering case using business process and case management assets. Places an IT hardware order based on needs and approvals. Mortgages : (Decision management with rules) Loan approval process using rule-based decision assets. Determines loan eligibility based on applicant data and qualifications. Mortgage_Process : (Process automation) Loan approval process using business process and decision assets. Determines loan eligibility based on applicant data and qualifications. OptaCloud : (Business optimization) Resource allocation optimization using decision and solver assets. Assigns processes to computers with limited resources. Traffic_Violation : (Decision management with DMN) Traffic violation decision service using a Decision Model and Notation (DMN) model. Determines driver penalty and suspension based on traffic violations. 1.1. Accessing sample projects and business assets in Business Central You can use the sample projects in Business Central to explore business assets as a reference for the rules or other assets that you create in your own Red Hat Process Automation Manager projects. Prerequisites Business Central is installed and running. For installation options, see Planning a Red Hat Process Automation Manager installation . Procedure In Business Central, go to Menu Design Projects . If there are existing projects, you can access the samples by clicking the MySpace default space and selecting Try Samples from the Add Project drop-down menu. If there are no existing projects, click Try samples . Review the descriptions for each sample project to determine which project you want to explore. Each sample project is designed differently to demonstrate process automation, decision management, or business optimization assets and logic in Red Hat Process Automation Manager. Select one or more sample projects and click Ok to add the projects to your space. In the Projects page of your space, select one of the sample projects to view the assets for that project. 
Select each asset to explore how the project is designed to achieve the specified goal or workflow. Some of the sample projects contain more than one page of assets. Click the left or right arrows in the upper-right corner to view the full asset list. Figure 1.1. Asset page selection In the upper-right corner of the project Assets page, click Build to build the sample project or Deploy to build the project and then deploy it to KIE Server. Note You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The next time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a corresponding project in Business Central, go to project Settings General Settings Version , toggle the Development Mode option, and click Save . By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode. To review project deployment details, click View deployment details in the deployment banner at the top of the screen or in the Deploy drop-down menu. This option directs you to the Menu Deploy Execution Servers page. 
Chapter 32. Performing cluster maintenance In order to perform maintenance on the nodes of your cluster, you may need to stop or move the resources and services running on that cluster. Or you may need to stop the cluster software while leaving the services untouched. Pacemaker provides a variety of methods for performing system maintenance. If you need to stop a node in a cluster while continuing to provide the services running on that cluster on another node, you can put the cluster node in standby mode. A node that is in standby mode is no longer able to host resources. Any resource currently active on the node will be moved to another node, or stopped if no other node is eligible to run the resource. For information about standby mode, see Putting a node into standby mode . If you need to move an individual resource off the node on which it is currently running without stopping that resource, you can use the pcs resource move command to move the resource to a different node. When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. When you are ready to move the resource back, you can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node, however, since where the resources can run at that point depends on how you have configured your resources initially. You can relocate a resource to its preferred node with the pcs resource relocate run command. If you need to stop a running resource entirely and prevent the cluster from starting it again, you can use the pcs resource disable command. For information on the pcs resource disable command, see Disabling, enabling, and banning cluster resources . If you want to prevent Pacemaker from taking any action for a resource (for example, if you want to disable recovery actions while performing maintenance on the resource, or if you need to reload the /etc/sysconfig/pacemaker settings), use the pcs resource unmanage command, as described in Setting a resource to unmanaged mode . Pacemaker Remote connection resources should never be unmanaged. If you need to put the cluster in a state where no services will be started or stopped, you can set the maintenance-mode cluster property. Putting the cluster into maintenance mode automatically unmanages all resources. For information about putting the cluster in maintenance mode, see Putting a cluster in maintenance mode . If you need to update the packages that make up the RHEL High Availability and Resilient Storage Add-Ons, you can update the packages on one node at a time or on the entire cluster as a whole, as summarized in Updating a RHEL high availability cluster . If you need to perform maintenance on a Pacemaker remote node, you can remove that node from the cluster by disabling the remote node resource, as described in Upgrading remote nodes and guest nodes . If you need to migrate a VM in a RHEL cluster, you will first need to stop the cluster services on the VM to remove the node from the cluster and then start the cluster back up after performing the migration, as described in Migrating VMs in a RHEL cluster . 32.1. Putting a node into standby mode When a cluster node is in standby mode, the node is no longer able to host resources. Any resources currently active on the node will be moved to another node. 
The following command puts the specified node into standby mode. If you specify the --all , this command puts all nodes into standby mode. You can use this command when updating a resource's packages. You can also use this command when testing a configuration, to simulate recovery without actually shutting down a node. The following command removes the specified node from standby mode. After running this command, the specified node is then able to host resources. If you specify the --all , this command removes all nodes from standby mode. Note that when you execute the pcs node standby command, this prevents resources from running on the indicated node. When you execute the pcs node unstandby command, this allows resources to run on the indicated node. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. 32.2. Manually moving cluster resources You can override the cluster and force resources to move from their current location. There are two occasions when you would want to do this: When a node is under maintenance, and you need to move all resources running on that node to a different node When individually specified resources need to be moved To move all resources running on a node to a different node, you put the node in standby mode. You can move individually specified resources in either of the following ways. You can use the pcs resource move command to move a resource off a node on which it is currently running. You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. 32.2.1. Moving a resource from its current node To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource as defined. Specify the destination_node if you want to indicate on which node to run the resource that you are moving. When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. By default, the location constraint that the command creates is automatically removed once the resource has been moved. If removing the constraint would cause the resource to move back to the original node, as might happen if the resource-stickiness value for the resource is 0, the pcs resource move command fails. If you would like to move a resource and leave the resulting constraint in place, use the pcs resource move-with-constraint command. If you specify the --promoted parameter of the pcs resource move command, the constraint applies only to promoted instances of the resource. If you specify the --strict parameter of the pcs resource move command, the command will fail if resources other than the one specified in the command would be affected. You can optionally configure a --wait[= n ] parameter for the pcs resource move command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, it defaults to a value of 60 minutes. 32.2.2. 
Moving a resource to its preferred node After a resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. To relocate resources to their preferred node, use the following command. A preferred node is determined by the current cluster status, constraints, resource location, and other settings and may change over time. If you do not specify any resources, all resources are relocated to their preferred nodes. This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can enter the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, enter the pcs resource relocate show command. 32.3. Disabling, enabling, and banning cluster resources In addition to the pcs resource move and pcs resource relocate commands, there are a variety of other commands you can use to control the behavior of cluster resources. Disabling a cluster resource You can manually stop a running resource and prevent the cluster from starting it again with the following command. Depending on the rest of the configuration (constraints, options, failures, and so on), the resource may remain started. If you specify the --wait option, pcs will wait up to 'n' seconds for the resource to stop and then return 0 if the resource is stopped or 1 if the resource has not stopped. If 'n' is not specified it defaults to 60 minutes. You can specify that a resource be disabled only if disabling the resource would not have an effect on other resources. Ensuring that this would be the case can be impossible to do by hand when complex resource relations are set up. The pcs resource disable --simulate command shows the effects of disabling a resource while not changing the cluster configuration. The pcs resource disable --safe command disables a resource only if no other resources would be affected in any way, such as being migrated from one node to another. The pcs resource safe-disable command is an alias for the pcs resource disable --safe command. The pcs resource disable --safe --no-strict command disables a resource only if no other resources would be stopped or demoted. You can specify the --brief option for the pcs resource disable --safe command to print errors only. The error report that the pcs resource disable --safe command generates if the safe disable operation fails contains the affected resource IDs. If you need to know only the resource IDs of resources that would be affected by disabling a resource, use the --brief option, which does not provide the full simulation result. Enabling a cluster resource Use the following command to allow the cluster to start a resource. Depending on the rest of the configuration, the resource may remain stopped. If you specify the --wait option, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started or 1 if the resource has not started. If 'n' is not specified it defaults to 60 minutes. 
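As an illustration of the disable and enable commands described above, a typical maintenance sequence might look like the following. The resource name my-webserver and the timeout value are illustrative only:
$ pcs resource disable my-webserver --wait=60
# ... perform maintenance on the resource ...
$ pcs resource enable my-webserver --wait=60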
Preventing a resource from running on a particular node Use the following command to prevent a resource from running on a specified node, or on the current node if no node is specified. Note that when you execute the pcs resource ban command, this adds a -INFINITY location constraint to the resource to prevent it from running on the indicated node. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --promoted parameter of the pcs resource ban command, the scope of the constraint is limited to the promoted role and you must specify promotable_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource ban command to indicate a period of time the constraint should remain. You can optionally configure a --wait[= n ] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. Forcing a resource to start on the current node Use the debug-start parameter of the pcs resource command to force a specified resource to start on the current node, ignoring the cluster recommendations and printing the output from starting the resource. This is mainly used for debugging resources; starting resources on a cluster is (almost) always done by Pacemaker and not directly with a pcs command. If your resource is not starting, it is usually due to either a misconfiguration of the resource (which you debug in the system log), constraints that prevent the resource from starting, or the resource being disabled. You can use this command to test resource configuration, but it should not normally be used to start resources in a cluster. The format of the debug-start command is as follows. 32.4. Setting a resource to unmanaged mode When a resource is in unmanaged mode, the resource is still in the configuration but Pacemaker does not manage the resource. The following command sets the indicated resources to unmanaged mode. The following command sets resources to managed mode, which is the default state. You can specify the name of a resource group with the pcs resource manage or pcs resource unmanage command. The command will act on all of the resources in the group, so that you can set all of the resources in a group to managed or unmanaged mode with a single command and then manage the contained resources individually. 32.5. Putting a cluster in maintenance mode When a cluster is in maintenance mode, the cluster does not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it. To put a cluster in maintenance mode, use the following command to set the maintenance-mode cluster property to true . To remove a cluster from maintenance mode, use the following command to set the maintenance-mode cluster property to false . You can remove a cluster property from the configuration with the following command. Alternately, you can remove a cluster property from a configuration by leaving the value field of the pcs property set command blank. 
This restores that property to its default value. For example, if you have previously set the symmetric-cluster property to false , the following command removes the value you have set from the configuration and restores the value of symmetric-cluster to true , which is its default value. 32.6. Updating a RHEL high availability cluster Updating packages that make up the RHEL High Availability and Resilient Storage Add-Ons, either individually or as a whole, can be done in one of two general ways: Rolling Updates : Remove one node at a time from service, update its software, then integrate it back into the cluster. This allows the cluster to continue providing service and managing resources while each node is updated. Entire Cluster Update : Stop the entire cluster, apply updates to all nodes, then start the cluster back up. Warning It is critical that when performing software update procedures for Red Hat Enterprise Linux High Availability and Resilient Storage clusters, you ensure that any node that will undergo updates is not an active member of the cluster before those updates are initiated. For a full description of each of these methods and the procedures to follow for the updates, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster . 32.7. Upgrading remote nodes and guest nodes If the pacemaker_remote service is stopped on an active remote node or guest node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed. If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Procedure Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. The connection resource would be the ocf:pacemaker:remote resource for a remote node or, commonly, the ocf:heartbeat:VirtualDomain resource for a guest node. For guest nodes, this command will also stop the VM, so the VM must be started outside the cluster (for example, using virsh ) to perform any maintenance. Perform the required maintenance. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command. 32.8. Migrating VMs in a RHEL cluster Red Hat does not support live migration of active cluster nodes across hypervisors or hosts, as noted in Support Policies for RHEL High Availability Clusters - General Conditions with Virtualized Cluster Members . If you need to perform a live migration, you will first need to stop the cluster services on the VM to remove the node from the cluster, and then start the cluster back up after performing the migration. The following steps outline the procedure for removing a VM from a cluster, migrating the VM, and restoring the VM to the cluster. 
This procedure applies to VMs that are used as full cluster nodes, not to VMs managed as cluster resources (including VMs used as guest nodes) which can be live-migrated without special precautions. For general information about the fuller procedure required for updating packages that make up the RHEL High Availability and Resilient Storage Add-Ons, either individually or as a whole, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster . Note Before performing this procedure, consider the effect on cluster quorum of removing a cluster node. For example, if you have a three-node cluster and you remove one node, your cluster can not withstand any node failure. This is because if one node of a three-node cluster is already down, removing a second node will lose quorum. Procedure If any preparations need to be made before stopping or moving the resources or software running on the VM to migrate, perform those steps. Run the following command on the VM to stop the cluster software on the VM. Perform the live migration of the VM. Start cluster services on the VM. 32.9. Identifying clusters by UUID As of Red Hat Enterprise Linux 9.1, when you create a cluster it has an associated UUID. Since a cluster name is not a unique cluster identifier, a third-party tool such as a configuration management database that manages multiple clusters with the same name can uniquely identify a cluster by means of its UUID. You can display the current cluster UUID with the pcs cluster config [show] command, which includes the cluster UUID in its output. To add a UUID to an existing cluster, run the following command. To regenerate a UUID for a cluster with an existing UUID, run the following command. | [
"pcs node standby node | --all",
"pcs node unstandby node | --all",
"pcs resource move resource_id [ destination_node ] [--promoted] [--strict] [--wait[= n ]]",
"pcs resource relocate run [ resource1 ] [ resource2 ]",
"pcs resource disable resource_id [--wait[= n ]]",
"pcs resource enable resource_id [--wait[= n ]]",
"pcs resource ban resource_id [ node ] [--promoted] [lifetime= lifetime ] [--wait[= n ]]",
"pcs resource debug-start resource_id",
"pcs resource unmanage resource1 [ resource2 ]",
"pcs resource manage resource1 [ resource2 ]",
"pcs property set maintenance-mode=true",
"pcs property set maintenance-mode=false",
"pcs property unset property",
"pcs property set symmetric-cluster=",
"pcs resource disable resourcename",
"pcs resource enable resourcename",
"pcs cluster stop",
"pcs cluster start",
"pcs cluster config uuid generate",
"pcs cluster config uuid generate --force"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_cluster-maintenance-configuring-and-managing-high-availability-clusters |
3.15. Starting a Virtual Machine | 3.15. Starting a Virtual Machine This Ruby example starts a virtual machine. # Get the reference to the "vms" service: vms_service = connection.system_service.vms_service # Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] # Locate the service that manages the virtual machine, as that is where # the action methods are defined: vm_service = vms_service.vm_service(vm.id) # Call the "start" method of the service to start it: vm_service.start # Wait until the virtual machine status is UP: loop do sleep(5) vm = vm_service.get break if vm.status == OvirtSDK4::VmStatus::UP end For more information, see VmService:start . | [
"Get the reference to the \"vms\" service: vms_service = connection.system_service.vms_service Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] Locate the service that manages the virtual machine, as that is where the action methods are defined: vm_service = vms_service.vm_service(vm.id) Call the \"start\" method of the service to start it: vm_service.start Wait until the virtual machine status is UP: loop do sleep(5) vm = vm_service.get break if vm.status == OvirtSDK4::VmStatus::UP end"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/starting_a_virtual_machine |
Chapter 4. Monitoring Debezium You can use the JMX metrics provided by Apache Zookeeper , Apache Kafka , and Kafka Connect to monitor Debezium. To use these metrics, you must enable them when you start the Zookeeper, Kafka, and Kafka Connect services. Enabling JMX involves setting the correct environment variables. Note If you are running multiple services on the same machine, be sure to use distinct JMX ports for each service. 4.1. Metrics for monitoring Debezium connectors In addition to the built-in support for JMX metrics in Kafka, Zookeeper, and Kafka Connect, each connector provides additional metrics that you can use to monitor their activities. Db2 connector metrics MongoDB connector metrics MySQL connector metrics Oracle connector metrics PostgreSQL connector metrics SQL Server connector metrics 4.2. Enabling JMX in local installations With Zookeeper, Kafka, and Kafka Connect, you enable JMX by setting the appropriate environment variables when you start each service. 4.2.1. Zookeeper JMX environment variables Zookeeper has built-in support for JMX. When running Zookeeper using a local installation, the zkServer.sh script recognizes the following environment variables: JMXPORT Enables JMX and specifies the port number that will be used for JMX. The value is used to specify the JVM parameter -Dcom.sun.management.jmxremote.port=$JMXPORT . JMXAUTH Whether JMX clients must use password authentication when connecting. Must be either true or false . The default is false . The value is used to specify the JVM parameter -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH . JMXSSL Whether JMX clients connect using SSL/TLS. Must be either true or false . The default is false . The value is used to specify the JVM parameter -Dcom.sun.management.jmxremote.ssl=$JMXSSL . JMXLOG4J Whether the Log4J JMX MBeans should be disabled. Must be either true or false . The default is true . The value is used to specify the JVM parameter -Dzookeeper.jmx.log4j.disable=$JMXLOG4J . 4.2.2. Kafka JMX environment variables When running Kafka using a local installation, the kafka-server-start.sh script recognizes the following environment variables: JMX_PORT Enables JMX and specifies the port number that will be used for JMX. The value is used to specify the JVM parameter -Dcom.sun.management.jmxremote.port=$JMX_PORT . KAFKA_JMX_OPTS The JMX options, which are passed directly to the JVM during startup. The default options are: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false 4.2.3. Kafka Connect JMX environment variables When running Kafka Connect using a local installation, the connect-distributed.sh script recognizes the following environment variables: JMX_PORT Enables JMX and specifies the port number that will be used for JMX. The value is used to specify the JVM parameter -Dcom.sun.management.jmxremote.port=$JMX_PORT . KAFKA_JMX_OPTS The JMX options, which are passed directly to the JVM during startup. The default options are: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false 4.3. Monitoring Debezium on OpenShift If you are using Debezium on OpenShift, you can obtain JMX metrics by opening a JMX port on 9999 . For information about configuring JMX connection options, see the KafkaJmxOptions schema reference in the Streams for Apache Kafka API Reference. 
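By contrast with the local environment variables above, when Kafka runs on OpenShift it is typically configured declaratively in the Kafka custom resource. A minimal sketch, assuming the jmxOptions field from the KafkaJmxOptions schema referenced above (the cluster name is illustrative, and the remaining required fields of the resource are omitted):
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # An empty object enables the JMX port without authentication
    jmxOptions: {}
  # ... listeners, storage, and other required fields omitted ...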
In addition, you can use Prometheus and Grafana to monitor the JMX metrics. For more information, see Monitoring in Streams for Apache Kafka on OpenShift Overview and Setting up metrics and dashboards in Deploying and Managing Streams for Apache Kafka on OpenShift . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/debezium_user_guide/monitoring-debezium |
Chapter 5. Jenkins 5.1. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example: $ podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 5.1.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin. Standard authentication provided by Jenkins. 5.1.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access. Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. 
If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . To get a sense of which permission groups and permissions IDs are available, go to the matrix authorization page in the Jenkins console and review the IDs for the groups and individual permissions in the table it provides. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user. Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plugin polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 5.1.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication: $ oc new-app -e \ JENKINS_PASSWORD=<password> \ ocp-tools-4/jenkins-rhel8 5.1.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. 
JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. 
The fatal error file is saved at /var/lib/jenkins/logs . Default: false AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base-rhel8:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest NODEJS_BUILDER_IMAGE Setting this value overrides the image used for the nodejs-builder container in the nodejs-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the nodejs:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/nodejs:latest JAVA_FIPS_OPTIONS Setting this value controls how the JVM operates when running on a FIPS node. For more information, see Configure OpenJDK 11 in FIPS mode . Default: -Dcom.redhat.fips=false 5.1.3. Providing Jenkins cross project access If you run Jenkins in a project other than the one it must operate on, you must provide Jenkins with an access token for that project. Procedure Identify the secret for the service account that has the appropriate permissions in the project that Jenkins must access: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 5.1.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 5.1.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins job definitions, add additional plugins, or replace the provided config.xml file with your own custom configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains those binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. A sample layout follows this list.
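For illustration, a minimal repository that follows this layout might look like the following. The plugin file and job name are hypothetical placeholders:

plugins/
    my-plugin.jpi
plugins.txt
configuration/
    config.xml
    jobs/
        sample-job/
            config.xml

Here, plugins.txt might contain a single line such as git:3.7.0 , and configuration/jobs/sample-job/config.xml would be copied to /var/lib/jenkins/jobs/sample-job/config.xml when the customized image is built, as described next.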
The contents of the configuration/ directory are copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. Sample build configuration that customizes the Jenkins image in OpenShift Container Platform apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 5.1.6. Configuring the Jenkins Kubernetes plugin The OpenShift Jenkins image includes the pre-installed Kubernetes plugin so that Jenkins agents can be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plugin, OpenShift Container Platform provides images that are suitable for use as Jenkins agents, including the Base, Maven, and Node.js images. Important OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . OpenShift Container Platform 4.11 removes the OpenShift Jenkins Maven and NodeJS Agent images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. Both the Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. Important In OpenShift Container Platform 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Container Platform Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams present in the openshift namespace. The OpenShift Container Platform Jenkins image has two pod templates named java-builder and nodejs-builder with sidecar containers that demonstrate this approach.
These two pod templates use the latest Java and NodeJS versions provided by the java and nodejs image streams in the openshift namespace. With this update, in OpenShift Container Platform 4.10 and later, the non-sidecar maven and nodejs pod templates for Jenkins are deprecated. These pod templates are planned for removal in a future release. Bug fixes and support are provided through the end of that future life cycle, after which no new feature enhancements will be made. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. With the OpenShift Container Platform sync plugin, the Jenkins image on Jenkins startup searches for the following within the project that it is running or the projects specifically listed in the plugin's configuration: Image streams that have the label role set to jenkins-agent . Image stream tags that have the annotation role set to jenkins-agent . Config maps that have the label role set to jenkins-agent . When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration so you can assign your Jenkins jobs to run in a pod that runs the container image that is provided by the image stream. The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream or image stream tag object with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, it assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) that is consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. One key benefit of using config maps, rather than image streams or image stream tags, is that you can control all the parameters of the Kubernetes plugin pod template. 
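For illustration, an image stream labeled for auto-discovery as described above might look like the following sketch; the image stream name, agent label, and referenced image are hypothetical:

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: custom-agent
  labels:
    role: jenkins-agent 1
  annotations:
    agent-label: custom-agent 2
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/custom-agent:latest

1 The role label set to jenkins-agent causes the sync plugin to generate a Kubernetes plugin pod template for this image stream.
2 The optional agent-label annotation sets the label field of the generated pod template; otherwise, the image stream name is used as the label.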
Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams that are present in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note If you log in to the Jenkins console and make further changes to the pod template configuration after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the config map has changed, it will replace the pod template and overwrite those
configuration changes. You cannot merge a new configuration with the existing configuration. After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag results in the deletion of any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. Either creating appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or adding labels after their initial creation, leads to the creation of a PodTemplate in the Kubernetes plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration and override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. For more details, see the official Jenkins documentation . Additional resources Important changes to OpenShift Jenkins images 5.1.7. Jenkins permissions If in the config map the <serviceAccount> element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario for the service accounts used by pods that are launched by the Kubernetes plugin running in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you must explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template.
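Whichever way you provide the pod template, the service account it references needs appropriate role grants. As a sketch, the edit role grant described above for the jenkins service account corresponds to a role binding along the following lines; the binding name and namespaces are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-edit
  namespace: target-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins-project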
If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from within the pod. 5.1.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define a deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. For example: USD oc describe jenkins-ephemeral 5.1.9. Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker , to run. The second BuildConfig layers the new WAR file into a container image. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable.
Sample BuildConfig that uses the Jenkins Kubernetes Plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, it inherits from the Maven pod template that is pre-defined by OpenShift Container Platform. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the container name jnlp . 5 Specify the container image name again. This is a known issue. 6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. An example of this format, using the sample java-builder pod template that is defined in the OpenShift Container Platform Jenkins image: def nodeLabel = 'java-builder' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } 5.1.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. If project quotas allow for it, see the recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations prescribe allocating even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin.
Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 5.1.11. Additional resources See Base image options for more information on the Red Hat Universal Base Images (UBI). 5.2. Jenkins agent OpenShift Container Platform provides Base, Maven, and Node.js images for use as Jenkins agents. The Base image for Jenkins agents does the following: Pulls in both the required tools, such as headless Java and the Jenkins JNLP client, and useful ones, including git , tar , zip , and nss , among others. Establishes the JNLP agent as the entry point. Includes the oc client tooling for invoking command line operations from within Jenkins jobs. Provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images. The Maven v3.5, Node.js v10, and Node.js v12 images extend the Base image. They provide Dockerfiles for the Universal Base Image (UBI) that you can reference when building new agent images. Also note the contrib and contrib/bin subdirectories, which enable you to insert configuration files and executable scripts for your image. Important Use a version of the agent image that is appropriate for your OpenShift Container Platform release version. Embedding an oc client version that is not compatible with the OpenShift Container Platform version can cause unexpected behavior. The OpenShift Container Platform Jenkins image also defines the following sample pod templates to illustrate how you can use these agent images with the Jenkins Kubernetes plugin: The maven pod template, which uses a single container that uses the OpenShift Container Platform Maven Jenkins agent image. The nodejs pod template, which uses a single container that uses the OpenShift Container Platform Node.js Jenkins agent image. The java-builder pod template, which employs two containers. One is the jnlp container, which uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. The second is the java container, which uses the java OpenShift Container Platform Sample ImageStream, which contains the various Java binaries, including the Maven binary mvn , for building code. The nodejs-builder pod template, which employs two containers. One is the jnlp container, which uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. The second is the nodejs container, which uses the nodejs OpenShift Container Platform Sample ImageStream, which contains the various Node.js binaries, including the npm binary, for building code. 5.2.1. Jenkins agent images The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io . Jenkins images are available through the Red Hat Registry: USD docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> USD docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag> To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry. 5.2.2. Jenkins agent environment variables Each Jenkins agent container can be configured with the following environment variables.
Variable Definition Example values and settings JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value USE_JAVA_VERSION Specifies the version of Java to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0 . If you extend the container base image, you can specify any alternative version of java using its associated suffix. The default value is java-11 . Example setting: java-1.8.0 5.2.3. Jenkins agent memory requirements A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac , Maven, or Gradle. By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely. By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill. By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads. 5.2.4.
Jenkins agent Gradle builds Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because, in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified. The following settings are suggested as a starting point for running Gradle builds in a memory-constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required. Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file. Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument. To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file. Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file. Override the Gradle JVM memory parameters by setting the GRADLE_OPTS , JAVA_OPTS , or JAVA_TOOL_OPTIONS environment variables. Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle , or through the -Dorg.gradle.jvmargs command line argument. 5.2.5. Jenkins agent pod retention Jenkins agent pods are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported: Always keeps the build pod regardless of build result. Default uses the plugin value, which is the pod template only. Never always deletes the pod. On Failure keeps the pod if it fails during the build. You can override pod retention in the pipeline Jenkinsfile: podTemplate(label: "mypod", cloud: "openshift", inheritFrom: "maven", podRetention: onFailure(), 1 containers: [ ... ]) { node("mypod") { ... } } 1 Allowed values for podRetention are never() , onFailure() , always() , and default() . Warning Pods that are kept might continue to run and count against resource quotas. 5.3. Migrating from Jenkins to OpenShift Pipelines or Tekton You can migrate your CI/CD workflows from Jenkins to Red Hat OpenShift Pipelines , a cloud-native CI/CD experience based on the Tekton project. 5.3.1. Comparison of Jenkins and OpenShift Pipelines concepts You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines. 5.3.1.1. Jenkins terminology Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows: Pipeline : Automates the entire process of building, testing, and deploying applications by using Groovy syntax. Node : A machine capable of either orchestrating or executing a scripted pipeline. Stage : A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks. Step : A single task that specifies the exact action to be taken, either by using a command or a script. 5.3.1.2. OpenShift Pipelines terminology OpenShift Pipelines uses YAML syntax for declarative pipelines and consists of tasks. Some basic terms in OpenShift Pipelines are as follows: Pipeline : A set of tasks in a series, in parallel, or both. Task : A sequence of steps as commands, binaries, or scripts.
PipelineRun : Execution of a pipeline with one or more tasks. TaskRun : Execution of a task with one or more steps. Note You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts. Workspace : In OpenShift Pipelines, workspaces are conceptual blocks that serve the following purposes: Storage of inputs, outputs, and build artifacts. Common space to share data among tasks. Mount points for credentials held in secrets, configurations held in config maps, and common tools shared by an organization. Note In Jenkins, there is no direct equivalent of OpenShift Pipelines workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. 5.3.1.3. Mapping of concepts The building blocks of Jenkins and OpenShift Pipelines are not equivalent, and a specific comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and OpenShift Pipelines correlate in general: Table 5.1. Jenkins and OpenShift Pipelines - basic comparison Jenkins OpenShift Pipelines Pipeline Pipeline and PipelineRun Stage Task Step A step in a task 5.3.2. Migrating a sample pipeline from Jenkins to OpenShift Pipelines You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to OpenShift Pipelines. 5.3.2.1. Jenkins pipeline Consider a Jenkins pipeline written in Groovy for building, testing, and deploying: pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } } 5.3.2.2. OpenShift Pipelines pipeline To create a pipeline in OpenShift Pipelines that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: Example build task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: ["make"] workingDir: USD(workspaces.source.path) Example test task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: ["make", "check"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path) Example deploy task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: ["make", "deploy"] workingDir: USD(workspaces.source.path) You can combine the three tasks sequentially to form a pipeline in OpenShift Pipelines: Example: OpenShift Pipelines pipeline for building, testing, and deployment apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir
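A pipeline definition does not run by itself; as described in the terminology above, you execute it by creating a PipelineRun. A minimal, illustrative PipelineRun for the preceding pipeline might look like the following; the persistent volume claim name backing the shared workspace is hypothetical:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: myproject-pipeline-run
spec:
  pipelineRef:
    name: myproject-pipeline
  workspaces:
  - name: shared-dir
    persistentVolumeClaim:
      claimName: myproject-pvc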
5.3.3. Migrating from Jenkins plugins to Tekton Hub tasks You can extend the capability of Jenkins by using plugins . To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from Tekton Hub . For example, consider the git-clone task in Tekton Hub, which corresponds to the git plugin for Jenkins. Example: git-clone task from Tekton Hub apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source 5.3.4. Extending OpenShift Pipelines capabilities using custom tasks and scripts In OpenShift Pipelines, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of OpenShift Pipelines. Example: A custom task for running the maven test command apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: ["mvn", "test"] workingDir: USD(workspaces.source.path) Example: Run a custom shell script by providing its path ... steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh ... Example: Run a custom Python script by writing it in the YAML file ... steps: image: python script: | #!/usr/bin/env python3 print("hello from python!") ... 5.3.5. Comparison of Jenkins and OpenShift Pipelines execution models Jenkins and OpenShift Pipelines offer similar functions but are different in architecture and execution. Table 5.2. Comparison of execution models in Jenkins and OpenShift Pipelines Jenkins OpenShift Pipelines Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes. OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution. Containers are launched by the Jenkins controller node through the pipeline. OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). Extensibility is achieved by using plugins. Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts. 5.3.6. Examples of common use cases Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as: Compiling, building, and deploying images using Apache Maven Extending the core capabilities by using plugins Reusing shareable libraries and custom scripts 5.3.6.1. Running a Maven pipeline in Jenkins and OpenShift Pipelines You can use Maven in both Jenkins and OpenShift Pipelines workflows for compiling, building, and deploying images.
To map your existing Jenkins workflow to OpenShift Pipelines, consider the following examples: Example: Compile and build an image and deploy it to OpenShift using Maven in Jenkins #!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' } Example: Compile and build an image and deploy it to OpenShift using Maven in OpenShift Pipelines. apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: "USD(params.repo-url)" - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["-DskipTests", "clean", "compile"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["test"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["package"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd "USD(params.context-path)" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json 5.3.6.2. Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins Jenkins has the advantage of a large ecosystem of plugins developed over the years by its extensive user base. You can search and browse the plugins in the Jenkins Plugin Index . OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks is available in the Tekton Hub . In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities.
For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the Role-based Authorization Strategy plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system. 5.3.6.3. Sharing reusable code in Jenkins and OpenShift Pipelines Jenkins shared libraries provide reusable code for parts of Jenkins pipelines. The libraries are shared between Jenkinsfiles to create highly modular pipelines without code repetition. Although there is no direct equivalent of Jenkins shared libraries in OpenShift Pipelines, you can achieve similar workflows by using tasks from the Tekton Hub in combination with custom tasks and scripts. 5.3.7. Additional resources Understanding OpenShift Pipelines Role-based Access Control 5.4. Important changes to OpenShift Jenkins images OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io . It also removes the OpenShift Jenkins Maven and NodeJS Agent images from its payload: OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . OpenShift Container Platform 4.10 deprecated the OpenShift Jenkins Maven and NodeJS Agent images. OpenShift Container Platform 4.11 removes these images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . These changes support the OpenShift Container Platform 4.10 recommendation to use multiple container Pod Templates with the Jenkins Kubernetes Plugin . 5.4.1. Relocation of OpenShift Jenkins images OpenShift Container Platform 4.11 makes significant changes to the location and availability of specific OpenShift Jenkins images. Additionally, you can configure when and how to update these images. What stays the same with the OpenShift Jenkins images? The Cluster Samples Operator manages the ImageStream and Template objects for operating the OpenShift Jenkins images. By default, the Jenkins DeploymentConfig object from the Jenkins pod template triggers a redeployment when the Jenkins image changes. By default, this image is referenced by the jenkins:2 image stream tag of Jenkins image stream in the openshift namespace in the ImageStream YAML file in the Samples Operator payload. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the deprecated maven and nodejs pod templates are still in the default image configuration. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the jenkins-agent-maven and jenkins-agent-nodejs image streams still exist in your cluster. To maintain these image streams, see the following section, "What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace?" What changes in the support matrix of the OpenShift Jenkins image? Each new image in the ocp-tools-4 repository in the registry.redhat.io registry supports multiple versions of OpenShift Container Platform. 
When Red Hat updates one of these new images, it is simultaneously available for all versions. This availability is ideal when Red Hat updates an image in response to a security advisory. Initially, this change applies to OpenShift Container Platform 4.11 and later. It is planned that this change will eventually apply to OpenShift Container Platform 4.9 and later. Previously, each Jenkins image supported only one version of OpenShift Container Platform and Red Hat might update those images sequentially over time. What additions are there with the OpenShift Jenkins and Jenkins Agent Base ImageStream and ImageStreamTag objects? By moving from an in-payload image stream to an image stream that references non-payload images, OpenShift Container Platform can define additional image stream tags. Red Hat has created a series of new image stream tags to go along with the existing "value": "jenkins:2" and "value": "image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest" image stream tags present in OpenShift Container Platform 4.10 and earlier. These new image stream tags address some requests to improve how the Jenkins-related image streams are maintained. About the new image stream tags: ocp-upgrade-redeploy To update your Jenkins image when you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag corresponds to the existing 2 image stream tag of the jenkins image stream and the latest image stream tag of the jenkins-agent-base-rhel8 image stream. It employs an image tag specific to only one SHA or image digest. When the ocp-tools-4 image changes, such as for Jenkins security advisories, Red Hat Engineering updates the Cluster Samples Operator payload. user-maintained-upgrade-redeploy To manually redeploy Jenkins after you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the least specific image version indicator available. When you redeploy Jenkins, run the following command: USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift . When you issue this command, the OpenShift Container Platform ImageStream controller accesses the registry.redhat.io image registry and stores any updated images in the OpenShift image registry's slot for that Jenkins ImageStreamTag object. Otherwise, if you do not run this command, your Jenkins deployment configuration does not trigger a redeployment. scheduled-upgrade-redeploy To automatically redeploy the latest version of the Jenkins image when it is released, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the periodic importing of image stream tags feature of the OpenShift Container Platform image stream controller, which checks for changes in the backing image. If the image changes, for example, due to a recent Jenkins security advisory, OpenShift Container Platform triggers a redeployment of your Jenkins deployment configuration. See "Configuring periodic importing of image stream tags" in the following "Additional resources." What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace? The OpenShift Jenkins Maven and NodeJS Agent images for OpenShift Container Platform were deprecated in 4.10, and are removed from the OpenShift Container Platform install payload in 4.11. They do not have alternatives defined in the ocp-tools-4 repository. 
However, you can work around this by using the sidecar pattern described in the "Jenkins agent" topic mentioned in the following "Additional resources" section. Also, the Cluster Samples Operator does not delete the jenkins-agent-maven and jenkins-agent-nodejs image streams created by prior releases, which point to the tags of the respective OpenShift Container Platform payload images on registry.redhat.io . Therefore, you can pull updates to these images by running the following commands: USD oc import-image jenkins-agent-nodejs -n openshift USD oc import-image jenkins-agent-maven -n openshift 5.4.2. Customizing the Jenkins image stream tag To override the default upgrade behavior and control how the Jenkins image is upgraded, you set the image stream tag value that your Jenkins deployment configurations use. The default upgrade behavior is the behavior that existed when the Jenkins image was part of the install payload. The image stream tag names, 2 and ocp-upgrade-redeploy , in the jenkins-rhel.json image stream file use SHA-specific image references. Therefore, when those tags are updated with a new SHA, the OpenShift Container Platform image change controller automatically redeploys the Jenkins deployment configuration from the associated templates, such as jenkins-ephemeral.json or jenkins-persistent.json . For new deployments, to override that default value, you change the value of the JENKINS_IMAGE_STREAM_TAG in the jenkins-ephemeral.json Jenkins template. For example, replace the 2 in "value": "jenkins:2" with one of the following image stream tags: ocp-upgrade-redeploy , the default value, updates your Jenkins image when you upgrade OpenShift Container Platform. user-maintained-upgrade-redeploy requires you to manually redeploy Jenkins by running USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift after upgrading OpenShift Container Platform. scheduled-upgrade-redeploy periodically checks the given <image>:<tag> combination for changes and upgrades the image when it changes. The image change controller pulls the changed image and redeploys the Jenkins deployment configuration provisioned by the templates. For more information about this scheduled import policy, see "Adding tags to image streams" in the following "Additional resources" section. Note To override the current upgrade value for existing deployments, change the values of the environment variables that correspond to those template parameters. Prerequisites You are running OpenShift Jenkins on OpenShift Container Platform 4.11. You know the namespace where OpenShift Jenkins is deployed. Procedure Set the image stream tag value, replacing <namespace> with the namespace where OpenShift Jenkins is deployed and <image_stream_tag> with an image stream tag: Example USD oc patch dc jenkins -p '{"spec":{"triggers":[{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["jenkins"],"from":{"kind":"ImageStreamTag","namespace":"<namespace>","name":"jenkins:<image_stream_tag>"}}}]}}' Tip Alternatively, to edit the Jenkins deployment configuration YAML, enter USD oc edit dc/jenkins -n <namespace> and update the value: 'jenkins:<image_stream_tag>' line. 5.4.3. Additional resources Adding tags to image streams Configuring periodic importing of image stream tags Jenkins agent Certified jenkins images Certified jenkins-agent-base images Certified jenkins-agent-maven images Certified jenkins-agent-nodejs images | [
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> // Writer, remove or update this in 4.12 <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }",
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json",
"oc import-image jenkins-agent-nodejs -n openshift",
"oc import-image jenkins-agent-maven -n openshift",
"oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/cicd/jenkins |
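The maven-pipeline example above can be exercised once it is applied to the cluster. The following is a minimal sketch, assuming the tkn CLI is installed and that the pipeline was saved as maven-pipeline.yaml; the revision, context path, and PVC name are illustrative placeholders, not values from the original document.

# Apply the migrated pipeline and start a run with the parameters and
# workspaces declared in the example above (placeholder values):
oc apply -f maven-pipeline.yaml
tkn pipeline start maven-pipeline \
    --param repo-url=https://github.com/openshift/openshift-jee-sample.git \
    --param revision=main \
    --param context-path=. \
    --workspace name=shared-workspace,claimName=pipeline-pvc \
    --workspace name=maven-settings,emptyDir="" \
    --showlog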
Chapter 10. Adding more RHEL compute machines to an OpenShift Container Platform cluster | Chapter 10. Adding more RHEL compute machines to an OpenShift Container Platform cluster If your OpenShift Container Platform cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it. 10.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.15, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 10.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base operating system: Use RHEL 8.8 or a later version with the minimal installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=TRUE attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 10.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required since various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You must list the AMI IDs so that the correct RHEL version needed for the compute machines is selected. 10.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. Procedure Use this command to list RHEL 8.8 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.8*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 
2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filters command option sets which version of RHEL is shown. In this example, since the filter is set by "Name=name,Values=RHEL-8.8*" , then RHEL 8.8 AMIs are shown. 4 The --region command option sets the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.8 or a later version of RHEL 8. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You may also manually import RHEL images to AWS . 10.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories. Enable only the repositories required by OpenShift Container Platform 4.15: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.15-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 10.5. Attaching the role permissions to a RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role .
Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles . 10.6. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<cluster-id>=shared . Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 10.7. Adding more RHEL compute machines to your cluster You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.15 cluster. Prerequisites Your OpenShift Container Platform cluster already contains RHEL compute nodes. The hosts file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook. The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. The kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook. You must prepare the RHEL hosts for installation. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. If you use SSH key-based authentication, you must manage the key with an SSH agent. Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Procedure Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables. Rename the [new_workers] section of the file to [workers] . Add a [new_workers] section to the file and define the fully-qualified domain names for each new host. The file resembles the following example: In this example, the mycluster-rhel8-0.example.com and mycluster-rhel8-1.example.com machines are in the cluster and you add the mycluster-rhel8-2.example.com and mycluster-rhel8-3.example.com machines. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the scaleup playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 10.8. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster.
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.9. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. | [
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.15-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/more-rhel-compute |
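Section 10.6 above describes the required kubernetes.io/cluster tag but does not show a command for applying it. A minimal sketch using the AWS CLI follows; the instance ID, <clusterid>, and region are placeholder assumptions to replace with your own values.

# Tag a RHEL worker instance as owned by the cluster:
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/<clusterid>,Value=owned \
    --region us-east-1
# Confirm the tag was applied:
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=i-0123456789abcdef0" \
    --region us-east-1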
4.10. Enterprise Information System Support | 4.10. Enterprise Information System Support The underlying resource adapters that represent the Enterprise Information System (EIS) and the EIS itself must support XA transactions to be able to participate in a distributed XA transaction using JBoss Data Virtualization. If a source system does not support XA, then it cannot participate in the distributed transaction. However, the source is still eligible to participate in data integration without XA support. Participation in an XA transaction is automatically determined based on the resource adapter's XA capability. It is the developer's responsibility to configure an XA resource when it is required to participate in distributed transactions. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/enterprise_information_system_support1
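As a hedged sketch of what configuring an XA resource can look like on the underlying JBoss EAP server, the management CLI commands below register an XA datasource. The datasource name, driver, JNDI name, credentials, and connection URL are all assumed example values, and the exact syntax can vary between EAP releases.

# Register and enable an XA datasource from the JBoss management CLI
# (illustrative values only):
$JBOSS_HOME/bin/jboss-cli.sh --connect <<'EOF'
/subsystem=datasources/xa-data-source=ExampleXADS:add(driver-name=oracle, jndi-name=java:/ExampleXADS, user-name=dbuser, password=dbpass)
/subsystem=datasources/xa-data-source=ExampleXADS/xa-datasource-properties=URL:add(value="jdbc:oracle:thin:@dbhost:1521:ORCL")
/subsystem=datasources/xa-data-source=ExampleXADS:enable
EOF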
Chapter 1. Introduction to Ceph block devices | Chapter 1. Introduction to Ceph block devices A block is a set length of bytes in a sequence, for example, a 512-byte block of data. Many blocks combined into a single file can be used as a storage device that you can read from and write to. Block-based storage interfaces are the most common way to store data with rotating media such as: Hard drives CD/DVD discs Floppy disks Traditional 9-track tapes The ubiquity of block device interfaces makes a virtual block device an ideal candidate for interacting with a mass data storage system like Red Hat Ceph Storage. Ceph block devices are thin-provisioned, resizable, and store data striped over multiple Object Storage Devices (OSD) in a Ceph storage cluster. Ceph block devices are also known as Reliable Autonomic Distributed Object Store (RADOS) Block Devices (RBDs). Ceph block devices leverage RADOS capabilities such as: Snapshots Replication Data consistency Ceph block devices interact with OSDs by using the librbd library. Ceph block devices deliver high performance with infinite scalability to Kernel Virtual Machines (KVMs), such as Quick Emulator (QEMU), and cloud-based computing systems, like OpenStack, that rely on the libvirt and QEMU utilities to integrate with Ceph block devices. You can use the same storage cluster to operate the Ceph Object Gateway and Ceph block devices simultaneously. Important Using Ceph block devices requires access to a running Ceph storage cluster. For details on installing a Red Hat Ceph Storage cluster, see the Red Hat Ceph Storage Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/block_device_guide/introduction-to-ceph-block-devices_block
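As a quick sketch of interacting with Ceph block devices from the command line, the following creates and maps an RBD image. The pool name, image name, placement-group count, and size are example values, and a client host with the rbd kernel module and cluster keyring configured is assumed.

# Create a pool for RBD, initialize it, and create a 1 GiB image:
ceph osd pool create rbd-pool 32
rbd pool init rbd-pool
rbd create --size 1024 rbd-pool/image1
# Map the image on a client and put a filesystem on it:
rbd map rbd-pool/image1
mkfs.xfs /dev/rbd/rbd-pool/image1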
Migrating to Identity Management on RHEL 9 | Migrating to Identity Management on RHEL 9 Red Hat Enterprise Linux 9 Upgrading a RHEL 8 IdM environment to RHEL 9 and migrating external LDAP solutions to IdM Red Hat Customer Content Services | [
"ipa config-show | grep \"CA renewal\" IPA CA renewal master: rhel8.example.com",
"dnf update ipa- *",
"ntpstat synchronised to NTP server ( ntp.example.com ) at stratum 3 time correct to within 42 ms polling server every 1024 s",
"dnf update ipa- *",
"ipa server-role-find --status enabled --server rhel8.example.com ---------------------- 3 server roles matched ---------------------- Server name: rhel8.example.com Role name: CA server Role status: enabled Server name: rhel8.example.com Role name: DNS server Role status: enabled [... output truncated ...]",
"ipa dnsserver-show rhel8.example.com ----------------------------- 1 DNS server matched ----------------------------- Server name: rhel8.example.com SOA mname: rhel8.example.com. Forwarders: 192.0.2.20 Forward policy: only -------------------------------------------------- Number of entries returned 1 --------------------------------------------------",
"ipa-replica-install --setup-ca --ip-address 192.0.2.1 --setup-dns --forwarder 192.0.2.20 --ntp-server ntp.example.com",
"ipactl status Directory Service: RUNNING [... output truncated ...] ipa: INFO: The ipactl command was successful",
"kinit admin ipa server-role-find --status enabled --server rhel9.example.com ---------------------- 2 server roles matched ---------------------- Server name: rhel9.example.com Role name: CA server Role status: enabled Server name: rhel9.example.com Role name: DNS server Role status: enabled",
"ipa-csreplica-manage list --verbose rhel9.example.com Directory Manager password: rhel8.example.com last init status: None last init ended: 1970-01-01 00:00:00+00:00 last update status: Error (0) Replica acquired successfully: Incremental update succeeded last update ended: 2019-02-13 13:55:13+00:00",
"id [email protected]",
"chronyc tracking Reference ID : CB00710F ( ntp.example.com ) Stratum : 3 Ref time (UTC) : Wed Feb 16 09:49:17 2022 [... output truncated ...]",
"ipa config-mod --ca-renewal-master-server rhel9.example.com IPA masters: rhel8.example.com, rhel9.example.com IPA CA servers: rhel8.example.com, rhel9.example.com IPA CA renewal master: rhel9.example.com",
"[user@rhel9 ~]USD ipactl restart",
"ca.certStatusUpdateInterval=0",
"[user@rhel8 ~]USD ipactl restart",
"ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2021-10-31 12:00:00 Last CRL Number: 6 The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage disable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd CRL generation disabled on the local host. Please make sure to configure CRL generation on another master with ipa-crlgen-manage enable. The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage status",
"ipa-crlgen-manage enable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd Forcing CRL update CRL generation enabled on the local host. Please make sure to have only a single CRL generation master. The ipa-crlgen-manage command was successful",
"ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2021-10-31 12:10:00 Last CRL Number: 7 The ipa-crlgen-manage command was successful",
"ipa user-add random_user First name: random Last name: user",
"ipa user-find random_user -------------- 1 user matched -------------- User login: random_user First name: random Last name: user",
"ipa user-add another_random_user First name: another Last name: random_user",
"ipa idrange-find ---------------- 3 ranges matched ---------------- Range name: EXAMPLE.COM_id_range First Posix ID of the range: 196600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 1000 First RID of the secondary RID range: 100000000 Range type: local domain range",
"ipa-replica-manage dnarange-show rhel8.example.com: 196600026-196799999 rhel9.example.com: No range set",
"ipa-replica-manage dnarange-set rhel8.example.com 196600026-196699999",
"ipa-replica-manage dnarange-set rhel9.example.com 196700000-196799999",
"ipactl stop Stopping CA Service Stopping pki-ca: [ OK ] Stopping HTTP Service Stopping httpd: [ OK ] Stopping MEMCACHE Service Stopping ipa_memcached: [ OK ] Stopping DNS Service Stopping named: [ OK ] Stopping KPASSWD Service Stopping Kerberos 5 Admin Server: [ OK ] Stopping KDC Service Stopping Kerberos 5 KDC: [ OK ] Stopping Directory Service Shutting down dirsrv: EXAMPLE-COM... [ OK ] PKI-IPA... [ OK ]",
"ntpstat synchronised to NTP server ( ntp.example.com ) at stratum 3 time correct to within 42 ms polling server every 1024 s",
"dnf update ipa- *",
"[migrated_idm_user@idmclient ~]USD kinit Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:",
"ldapmodify -x -D 'cn=directory manager' -w password -h ipaserver.example.com -p 389 dn: cn=config changetype: modify replace: nsslapd-sasl-max-buffer-size nsslapd-sasl-max-buffer-size: 4194304 modifying entry \"cn=config\"",
"ulimit -u 4096",
"ipa migrate-ds ldap://ldap.example.com:389",
"ipa migrate-ds ldap://ldap.example.com:389 --bind-dn=cn=Manager,dc=example,dc=com",
"ipa migrate-ds --base-dn=\"ou=people,dc=example,dc=com\" ldap://ldap.example.com:389",
"ipa migrate-ds --user-container=ou=employees --group-container=\"ou=employee groups\" ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee ldap://ldap.example.com:389",
"ipa migrate-ds --group-objectclass=groupOfNames --group-objectclass=groupOfUniqueNames ldap://ldap.example.com:389",
"ipa migrate-ds --exclude-groups=\"Golfers Group\" --exclude-users=idmuser101 --exclude-users=idmuser102 ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee --exclude-users=jsmith --exclude-users=bjensen --exclude-users=mreynolds ldap://ldap.example.com:389",
"ipa migrate-ds --user-ignore-attribute=userCertificate --user-ignore-objectclass=strongAuthenticationUser --group-ignore-objectclass=groupOfCertificates ldap://ldap.example.com:389",
"ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389",
"ipa-compat-manage disable",
"systemctl restart dirsrv.target",
"ipa config-mod --enable-migration=TRUE",
"ipa migrate-ds --your-options ldap://ldap.example.com:389",
"ipa migrate-ds --your-options --with-compat ldap://ldap.example.com:389",
"ipa-compat-manage enable",
"systemctl restart dirsrv.target",
"ipa config-mod --enable-migration=FALSE",
"ipa-client-install --enable-dns-update",
"https://ipaserver.example.com/ipa/migration",
"ldapsearch -LL -x -D 'cn=Directory Manager' -w secret -b 'cn=users,cn=accounts,dc=example,dc=com' '(&(!(krbprincipalkey= ))(userpassword= ))' uid",
"ipa user-show --all testing_user dn: uid=testing_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com User login: testing_user First name: testing Last name: user Full name: testing user Display name: testing user Initials: tu Home directory: /home/testing_user GECOS: testing user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1689700012 GID: 1689700012 Account disabled: False Preserved user: False Password: False Member of groups: ipausers Kerberos keys available: False ipauniqueid: 843b1ac8-6e38-11ec-8dfe-5254005aad3e mepmanagedentry: cn=testing_user,cn=groups,cn=accounts,dc=idm,dc=example,dc=com objectclass: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser, ipaSshGroupOfPubKeys, mepOriginEntry",
"ipa migrate-ds --ca-cert-file= /tmp/remote.crt --your-other-options ldaps:// ldap.example.com :636",
"ipa user-show --all testing_user dn: uid=testing_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com User login: testing_user First name: testing Last name: user Full name: testing user Display name: testing user Initials: tu Home directory: /home/testing_user GECOS: testing user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1689700012 GID: 1689700012 Account disabled: False Preserved user: False Password: False Member of groups: ipausers Kerberos keys available: False ipauniqueid: 843b1ac8-6e38-11ec-8dfe-5254005aad3e mepmanagedentry: cn=testing_user,cn=groups,cn=accounts,dc=idm,dc=example,dc=com objectclass: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser, ipaSshGroupOfPubKeys, mepOriginEntry"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/migrating_to_identity_management_on_rhel_9/index |
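Before retiring the original RHEL 8 server, it is worth confirming that the new RHEL 9 replica is healthy. A minimal verification sketch follows, assuming the host names used in the examples above and that the ipa-healthcheck package is installed.

# On the RHEL 9 replica, check services, enabled roles, and the CA renewal master:
kinit admin
ipactl status
ipa server-role-find --status enabled --server rhel9.example.com
ipa config-show | grep "CA renewal"
# Run the health checker and show only failures:
ipa-healthcheck --failures-only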
Migrating Cryostat 2.4 to Cryostat 3.0 | Migrating Cryostat 2.4 to Cryostat 3.0 Red Hat build of Cryostat 3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/migrating_cryostat_2.4_to_cryostat_3.0/index |
Chapter 6. Registering the System and Managing Subscriptions | Chapter 6. Registering the System and Managing Subscriptions The subscription service provides a mechanism to handle Red Hat software inventory and allows you to install additional software or update already installed programs to newer versions using the yum or PackageKit package managers. In Red Hat Enterprise Linux 6, the recommended way to register your system and attach subscriptions is to use Red Hat Subscription Management . Note It is also possible to register the system and attach subscriptions after installation during the firstboot process. For detailed information about firstboot see the Firstboot chapter in the Installation Guide for Red Hat Enterprise Linux 6. Note that firstboot is only available on systems after a graphical installation or after a kickstart installation where a desktop and the X window system were installed and graphical login was enabled. 6.1. Registering the System and Attaching Subscriptions Complete the following steps to register your system and attach one or more subscriptions using Red Hat Subscription Management. Note that all subscription-manager commands must be run as root . Run the following command to register your system. You will be prompted to enter your user name and password. Note that the user name and password are the same as your login credentials for Red Hat Customer Portal. subscription-manager register Determine the pool ID of a subscription that you require. To do so, type the following at a shell prompt to display a list of all subscriptions that are available for your system: subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to your subscription. To list subscriptions for all architectures, add the --all option. The pool ID is listed on a line beginning with Pool ID . Attach the appropriate subscription to your system by entering a command as follows: subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions your system has currently attached, at any time, run: subscription-manager list --consumed Note If you use a firewall or a proxy, you may need additional configuration to allow yum and subscription-manager to work correctly. Refer to the "Setting Firewall Access for Content Delivery" section of the Red Hat Enterprise Linux 6 Subscription Management guide if you use a firewall and to the "Using an HTTP Proxy" section if you use a proxy. For more details on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see the designated solution article . For comprehensive information about subscriptions, see the Red Hat Subscription Management collection of guides. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/chap-subscription_and_support-registering_a_system_and_managing_subscriptions
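The steps above can be combined into a short script. This is a sketch only; it attaches the first matching pool ID, which may not be the subscription you intend, so review the available list before automating.

# Register, pick the first OpenShift pool ID from the list, attach, and verify:
subscription-manager register --username=<user_name> --password=<password>
POOL_ID=$(subscription-manager list --available --matches '*OpenShift*' \
    | awk '/^Pool ID:/ {print $3; exit}')
subscription-manager attach --pool="$POOL_ID"
subscription-manager list --consumed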
Chapter 12. Security | Chapter 12. Security GSSAPI key-exchange algorithms can now be selectively disabled In view of the Logjam security vulnerability, the gss-group1-sha1-* key-exchange methods are no longer considered secure. While it was possible to disable this key-exchange method as a normal key exchange, it was not possible to disable it as a GSSAPI key exchange. With this update, the administrator can selectively disable this or other algorithms used by the GSSAPI key exchange. SELinux policy for Red Hat Gluster Storage has been added Previously, SELinux policy for Red Hat Gluster Storage (RHGS) components was missing, and Gluster worked correctly only when SELinux was in permissive mode. With this update, SELinux policy rules for the glusterd (glusterFS Management Service), glusterfsd (NFS server), smbd , nfsd , rpcd , and ctdbd processes have been updated, providing SELinux support for Gluster. openscap rebase to version 1.2.5 The openscap packages have been upgraded to upstream version 1.2.5, which provides a number of bug fixes and enhancements over the previous version. Notable enhancements include: * Support for OVAL version 5.11, which brings multiple improvements such as for systemd properties * Introduced native support of xml.bz2 input files * Introduced the oscap-ssh tool for assessing remote systems * Introduced the oscap-docker tool for assessing containers/images scap-security-guide rebase to version 0.1.25 The scap-security-guide tool has been upgraded to upstream version 0.1.25, which provides a number of bug fixes and enhancements over the previous version. Notable enhancements include: * New security profiles for Red Hat Enterprise Linux 7 Server: Common Profile for General-Purpose Systems, Draft PCI-DSS v3 Control Baseline, Standard System Security Profile, and Draft STIG for Red Hat Enterprise Linux 7 Server. * New security benchmarks for Firefox and Java Runtime Environment (JRE) components running on Red Hat Enterprise Linux 6 and 7. * New scap-security-guide-doc subpackage, which contains HTML-formatted documents containing security guides generated from XCCDF benchmarks (for every security profile shipped in security benchmarks for Red Hat Enterprise Linux 6 and 7, Firefox, and JRE). | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/security
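As an illustration of the new oscap-ssh tool mentioned above, the command below evaluates a remote RHEL 7 host. The profile ID and datastream path follow scap-security-guide's usual layout but should be treated as assumptions to verify on your system.

# Scan a remote host over SSH against the Standard profile and write an HTML report:
oscap-ssh root@host.example.com 22 xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_standard \
    --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml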
6.3. Deployment | 6.3. Deployment 389-ds-base component, BZ# 878111 The ns-slapd utility terminates unexpectedly if it cannot rename the dirsrv- <instance> log files in the /var/log/ directory due to incorrect permissions on the directory. cpuspeed component, BZ# 626893 Some HP Proliant servers may report incorrect CPU frequency values in /proc/cpuinfo or /sys/device/system/cpu/*/cpufreq . This is due to the firmware manipulating the CPU frequency without providing any notification to the operating system. To avoid this, ensure that the HP Power Regulator option in the BIOS is set to OS Control . An alternative available on more recent systems is to set Collaborative Power Control to Enabled . releng component, BZ# 644778 Some packages in the Optional repositories on RHN have multilib file conflicts. Consequently, these packages cannot have both the primary architecture (for example, x86_64) and secondary architecture (for example, i686) copies of the package installed on the same machine simultaneously. To work around this issue, install only one copy of the conflicting package. grub component, BZ# 695951 On certain UEFI-based systems, you may need to type BOOTX64 rather than bootx64 to boot the installer due to case sensitivity issues. grub component, BZ# 698708 When rebuilding the grub package on the x86_64 architecture, the glibc-static.i686 package must be used. Using the glibc-static.x86_64 package will not meet the build requirements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/deployment_issues
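For the grub rebuild note above, a hedged sketch of the working sequence on x86_64 follows; the source RPM file name varies by errata and is a placeholder here.

# Install the 32-bit static glibc before rebuilding grub on x86_64:
yum install glibc-static.i686
rpmbuild --rebuild grub-0.97-*.src.rpm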
Chapter 11. Runtime Updates | Chapter 11. Runtime Updates 11.1. Data Updates Data change events are used by Red Hat JBoss Data Virtualization to invalidate resultset cache entries. Resultset cache entries are tracked by the tables that contributed to their results. By default, Red Hat JBoss Data Virtualization will capture internal data events against physical sources and distribute them across the cluster. This approach has a couple of limitations. First, updates are scoped only to their originating VDB/version. Second, updates made outside of Red Hat JBoss Data Virtualization are not captured. To increase data consistency, external change data capture tools can be used to send events to Red Hat JBoss Data Virtualization. From within a cluster the org.teiid.events.EventDistributorFactory and org.teiid.events.EventDistributor can be used to distribute change events. The EventDistributorFactory can be looked up by its name "teiid/event-distributor-factory". See the example below. InitialContext ctx = new InitialContext(); EventDistributorFactory edf = (EventDistributorFactory)ctx.lookup("teiid/event-distributor-factory"); EventDistributor ed = edf.getEventDistributor(); ed.dataModification(vdbName, vdbVersion, schema, tableName); This will distribute a change event for schema.tableName in VDB vdbName.vdbVersion. When externally capturing all update events, the "detect-change-events" property in the "teiid" subsystem can be set to false, so change events will not be duplicated. By default, this property is set to true. Use of other EventDistributor methods to manually distribute other events is not always necessary. See System Procedures in Red Hat JBoss Development Guide: Reference Material for SQL based updates. Note Using the org.teiid.events.EventDistributor interface you can also update runtime metadata. Refer to the API. | [
"InitialContext ctx = new InitialContext(); EventDistributorFactory edf = (EventDistributorFactory)ctx.lookup(\"teiid/event-distributor-factory\"); EventDistributor ed = edf.getEventDistributor(); ed.dataModification(vdbName, vdbVersion, schema, tableName);"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-runtime_updates |
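If an external change data capture tool delivers all update events, the text above notes that the "detect-change-events" property can be set to false. A hedged sketch using the management CLI follows; the attribute name is taken from the text, but the exact resource path may differ between releases, so verify it with a read-resource operation first.

# Disable internal change-event capture in the teiid subsystem (sketch):
$JBOSS_HOME/bin/jboss-cli.sh --connect \
    --command='/subsystem=teiid:write-attribute(name=detect-change-events,value=false)'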
Chapter 1. Kubernetes overview | Chapter 1. Kubernetes overview Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds. Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources. Figure 1.1. Evolution of container technologies for classical deployments To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor. You can perform the following actions by using Kubernetes: Sharing resources Orchestrating containers across multiple hosts Installing new hardware configurations Running health checks and self-healing applications Scaling containerized applications 1.1. Kubernetes components Table 1.1. Kubernetes components Component Purpose kube-proxy Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. kube-controller-manager Governs the state of the cluster. kube-scheduler Allocates pods to nodes. etcd Stores cluster data. kube-apiserver Validates and configures data for the API objects. kubelet Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. kubectl Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver . Node Node is a physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster. container runtime container runtime runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. container-registry Stores and accesses the container images. Pod The pod is the smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node. 1.2. Kubernetes resources A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. 
Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources. By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state. Figure 1.2. Kubernetes cluster overview Table 1.2. Kubernetes Resources Resource Purpose Service Kubernetes uses services to expose a running application on a set of pods. ReplicaSets Kubernetes uses ReplicaSets to maintain a constant number of pods. Deployment A resource object that maintains the life cycle of an application. Kubernetes is a core component of OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, the OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using the OpenShift Container Platform. Figure 1.3. Architecture of Kubernetes A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You run Kubernetes applications on worker nodes. You can use Kubernetes namespaces to differentiate cluster resources within a cluster. Namespace scoping is applicable for resource objects, such as deployments, services, and pods. You cannot use namespaces for cluster-wide resource objects such as storage class, nodes, and persistent volumes. 1.3. Kubernetes conceptual guidelines Before getting started with the OpenShift Container Platform, consider these conceptual guidelines of Kubernetes: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. Using pods provides extra metadata about the containers and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. The API of an OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes for a container between running on any other Kubernetes implementation and running on OpenShift Container Platform; no changes to the application are required. OpenShift Container Platform brings added-value features to provide enterprise-ready enhancements to Kubernetes. The OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl .
While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command-line tool lacks many features that could make it more user-friendly. OpenShift Container Platform offers a set of features and command-line tools such as oc . Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and log management to your containerization platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/getting_started/kubernetes-overview |
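The following sketch illustrates the oc and kubectl compatibility described in the conceptual guidelines above; the project name is a placeholder, not a value from this document:
# oc accepts the same verbs and resource types as kubectl
kubectl get pods
oc get pods
# oc adds developer-oriented commands on top of the Kubernetes API,
# for example project management (a project wraps a Kubernetes namespace)
oc new-project demo
oc status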
Chapter 3. Using the Red Hat OpenShift Service on AWS dashboard to get cluster information | Chapter 3. Using the Red Hat OpenShift Service on AWS dashboard to get cluster information The Red Hat OpenShift Service on AWS web console captures high-level information about the cluster. 3.1. About the Red Hat OpenShift Service on AWS dashboards page Access the Red Hat OpenShift Service on AWS dashboard, which captures high-level information about the cluster, by navigating to Home Overview from the Red Hat OpenShift Service on AWS web console. The Red Hat OpenShift Service on AWS dashboard provides various cluster information, captured in individual dashboard cards. The Red Hat OpenShift Service on AWS dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Statuses include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment) Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about: CPU time Memory allocation Storage consumed Network resources consumed Pod count Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. 3.2. Recognizing resource and project limits and quotas You can view a graphical representation of available resources in the Topology view of the web console Developer perspective. If a resource has a message about resource limitations or quotas being reached, a yellow border appears around the resource name. Click the resource to open a side panel to see the message. If the Topology view has been zoomed out, a yellow dot indicates that a message is available. If you are using List View from the View Shortcuts menu, resources appear as a list. The Alerts column indicates if a message is available. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/web_console/using-dashboard-to-get-cluster-info |
Chapter 40. Clustering The pcs tool now manages bundle resources in Pacemaker As a Technology Preview starting with Red Hat Enterprise Linux 7.4, the pcs tool supports bundle resources. You can now use the pcs resource bundle create and the pcs resource bundle update commands to create and modify a bundle. You can add a resource to an existing bundle with the pcs resource create command. For information on the parameters you can set for a bundle resource, run the pcs resource bundle --help command; a short usage sketch follows this chapter. (BZ# 1433016 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation that a ping to a router might detect. (BZ#1476401) Heuristics supported in corosync-qdevice as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. (BZ# 1413573 , BZ# 1389209 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/technology_previews_clustering |
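As a hedged illustration of the bundle commands described in the clustering notes above, the following sketch creates and populates a bundle; the container image, resource agent, and option values are assumptions chosen for the example, not values from this document:
# Create a Pacemaker bundle resource that wraps a container image
pcs resource bundle create httpd-bundle container docker image=registry.example.com/httpd:latest replicas=2
# Add a resource to the existing bundle
pcs resource create httpd-web ocf:heartbeat:apache bundle httpd-bundle
# Review the parameters that can be set for a bundle resource
pcs resource bundle --help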
7.8.2. Useful Websites | 7.8.2. Useful Websites http://www.netfilter.org/ - The official homepage of the Netfilter and iptables project. http://www.tldp.org/ - The Linux Documentation Project contains several useful guides relating to firewall creation and administration. http://www.iana.org/assignments/port-numbers - The official list of registered and common service ports as assigned by the Internet Assigned Numbers Authority. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-firewall-moreinfo-web |
5.38. crash-trace-command | 5.38. crash-trace-command 5.38.1. RHBA-2012:0808 - crash-trace-command bug fix update An updated crash-trace-command package that fixes one bug is now available for Red Hat Enterprise Linux 6. The crash-trace-command package provides a trace extension module for the crash utility, allowing it to read ftrace data from a core dump file. Bug Fix BZ# 729018 Previously, the "trace.so" binary in the crash-trace-command package was compiled by the GCC compiler without the "-g" option. Therefore, no debugging information was included in its associated "trace.so.debug" file. This could affect a crash analysis performed by the Automatic Bug Reporting Tool (ABRT) and its retrace server. Also, proper debugging of crashes using the GDB utility was not possible under these circumstances. This update modifies the Makefile of crash-trace-command to compile the "trace.so" binary with the "RPM_OPT_FLAGS" flag, which ensures that the GCC's "-g" option is used during the compilation. Debugging and a crash analysis can now be performed as expected. All users of crash-trace-command are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/crash-trace-command |
A.7. Performance Co-Pilot (PCP) | A.7. Performance Co-Pilot (PCP) Performance Co-Pilot (PCP) provides a large number of command-line tools, graphical tools, and libraries. For more information on these tools, see their respective manual pages. Table A.1. System Services Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7 Name Description pmcd The Performance Metric Collector Daemon (PMCD). pmie The Performance Metrics Inference Engine. pmlogger The performance metrics logger. pmmgr Manages a collection of PCP daemons for a set of discovered local and remote hosts running the Performance Metric Collector Daemon (PMCD) according to zero or more configuration directories. pmproxy The Performance Metric Collector Daemon (PMCD) proxy server. pmwebd Binds a subset of the Performance Co-Pilot client API to RESTful web applications using the HTTP protocol. Table A.2. Tools Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7 Name Description pcp Displays the current status of a Performance Co-Pilot installation. pmatop Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network. pmchart Plots performance metrics values available through the facilities of the Performance Co-Pilot. pmclient Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI). pmcollectl Collects and displays system-level data, either from a live system or from a Performance Co-Pilot archive file. pmconfig Displays the values of configuration parameters. pmdbg Displays available Performance Co-Pilot debug control flags and their values. pmdiff Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions. pmdumplog Displays control, metadata, index, and state information from a Performance Co-Pilot archive file. pmdumptext Outputs the values of performance metrics collected live or from a Performance Co-Pilot archive. pmerr Displays available Performance Co-Pilot error codes and their corresponding error messages. pmfind Finds PCP services on the network. pmie An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmieconf Displays or sets configurable pmie variables. pminfo Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmiostat Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x dm option). pmlc Interactively configures active pmlogger instances. pmlogcheck Identifies invalid data in a Performance Co-Pilot archive file. pmlogconf Creates and modifies a pmlogger configuration file. pmloglabel Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file. pmlogsummary Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file. pmprobe Determines the availability of performance metrics. pmrep Reports on selected, easily customizable, performance metrics values. pmsocks Allows access to Performance Co-Pilot hosts through a firewall. pmstat Periodically displays a brief summary of system performance. pmstore Modifies the values of performance metrics.
pmtrace Provides a command line interface to the trace Performance Metrics Domain Agent (PMDA). pmval Displays the current value of a performance metric. Table A.3. PCP Metric Groups for XFS Metric Group Metrics provided xfs.* General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered and number of failure to cluster. xfs.allocs.* xfs.alloc_btree.* Range of metrics regarding the allocation of objects in the file system, these include number of extent and block creations/frees. Allocation tree lookup and compares along with extend record creation and deletion from the btree. xfs.block_map.* xfs.bmap_tree.* Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap. xfs.dir_ops.* Counters for directory operations on XFS file systems for creation, entry deletions, count of "getdent" operations. xfs.transactions.* Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. xfs.inode_ops.* Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. xfs.log.* xfs.log_tail.* Counters for the number of log buffer writes over XFS file systems include the number of blocks written to disk. Metrics also for the number of log flushes and pinning. xfs.xstrat.* Counts for the number of bytes of file data flushed out by the XFS flush daemon along with counters for number of buffers flushed to contiguous and non-contiguous space on disk. xfs.attr.* Counts for the number of attribute get, set, remove and list operations over all XFS file systems. xfs.quota.* Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims. xfs.buffer.* Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. xfs.btree.* Metrics regarding the operations of the XFS btree. xfs.control.reset Configuration metrics which are used to reset the metric counters for the XFS stats. Control metrics are toggled by means of the pmstore tool. Table A.4. PCP Metric Groups for XFS per Device Metric Group Metrics provided xfs.perdev.* General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered and number of failure to cluster. xfs.perdev.allocs.* xfs.perdev.alloc_btree.* Range of metrics regarding the allocation of objects in the file system, these include number of extent and block creations/frees. Allocation tree lookup and compares along with extend record creation and deletion from the btree. xfs.perdev.block_map.* xfs.perdev.bmap_tree.* Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap.
xfs.perdev.dir_ops.* Counters for directory operations of XFS file systems for creation, entry deletions, count of "getdent" operations. xfs.perdev.transactions.* Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. xfs.perdev.inode_ops.* Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. xfs.perdev.log.* xfs.perdev.log_tail.* Counters for the number of log buffer writes over XFS file systems include the number of blocks written to disk. Metrics also for the number of log flushes and pinning. xfs.perdev.xstrat.* Counts for the number of bytes of file data flushed out by the XFS flush daemon along with counters for number of buffers flushed to contiguous and non-contiguous space on disk. xfs.perdev.attr.* Counts for the number of attribute get, set, remove and list operations over all XFS file systems. xfs.perdev.quota.* Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims. xfs.perdev.buffer.* Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. xfs.perdev.btree.* Metrics regarding the operations of the XFS btree. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-performance_co_pilot_pcp |
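As a hedged sketch of querying the XFS metric groups listed in the tables above with the PCP tools, the following commands assume that the XFS PMDA is installed and that pmcd is running; metric availability can vary between PCP versions:
# Describe an XFS metric, including its data type, semantics, and units
pminfo -dt xfs.log.writes
# Sample the current value of a metric a few times
pmval -s 5 xfs.write_bytes
# Reset the XFS metric counters through the control metric, using pmstore
pmstore xfs.control.reset 1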
Chapter 11. Socket options in RHEL for Real Time | Chapter 11. Socket options in RHEL for Real Time The real-time socket is a two-way data transfer mechanism between two processes on same systems such as the UNIX domain and loopback devices or on different systems such as network sockets. Transmission Control Protocol (TCP) is the most common transport protocol and is often used to achieve consistent low latency for a service that requires constant communication or to cork the sockets in a low priority restricted environment. With new applications, hardware features, and kernel architecture optimizations, TCP has to introduce new approaches to handle the changes effectively. The new approaches can cause unstable program behaviors. Because the program behavior changes as the underlying operating system components change, they must be handled with care. One example of such behavior in TCP is the delay in sending small buffers. This allows sending them as one network packet. Buffering small writes to TCP and sending them all at once generally works well, but it can also create latencies. For real-time applications, the TCP_NODELAY socket option disables the delay and sends small writes as soon as they are ready. The relevant socket options for data transfer are TCP_NODELAY and TCP_CORK . 11.1. TCP_NODELAY socket option The TCP_NODELAY socket option disables Nagle's algorithm. Configuring TCP_NODELAY with the setsockopt sockets API function sends multiple small buffer writes as individual packets as soon as they are ready. Sending multiple logically related buffers as a single packet by building a contiguous packet before sending achieves better latency and performance. Alternatively, if the memory buffers are logically related but not contiguous, you can create an I/O vector and pass it to the kernel using writev on a socket with TCP_NODELAY enabled. The following example illustrates enabling TCP_NODELAY through the setsockopt sockets API. Note To use TCP_NODELAY effectively, avoid small, logically related buffer writes. With TCP_NODELAY , small writes make TCP send multiple buffers as individual packets, which may result in poor overall performance. Additional resources sendfile(2) man page on your system 11.2. TCP_CORK socket option The TCP_CORK option collects all data packets in a socket and prevents them from being transmitted until the buffer fills to a specified limit. This enables applications to build a packet in the kernel space and send data when TCP_CORK is disabled. TCP_CORK is set on a socket file descriptor using the setsockopt() function. When developing programs, if you must send bulk data from a file, consider using TCP_CORK with the sendfile() function. When a logical packet is built in the kernel by various components, enable TCP_CORK by configuring it to a value of 1 using the setsockopt sockets API. This is known as "corking the socket". TCP_CORK can cause bugs if the cork is not removed at an appropriate time. The following example illustrates enabling TCP_CORK through the setsockopt sockets API. In some environments, if the kernel is not able to identify when to remove the cork, you can manually remove it as follows: Additional resources sendfile(2) man page on your system 11.3. Example programs using socket options The TCP_NODELAY and TCP_CORK socket options significantly influence the behavior of a network connection. TCP_NODELAY disables Nagle's algorithm for applications that benefit from sending data packets as soon as they are ready.
With TCP_CORK , you can transfer multiple data packets simultaneously, with no delays between them. Note To enable the socket options, for example TCP_NODELAY , build the example programs with the following command and then set the appropriate options. When you run the tcp_nodelay_server and tcp_nodelay_client programs without any arguments, the client uses the default socket options. For more information about the tcp_nodelay_server and tcp_nodelay_client programs, see the Red Hat Knowledgebase solution TCP changes result in latency performance when small buffers are used . The example programs provide information about the performance impact these socket options can have on your applications. Performance impact on a client You can send small buffer writes to a client without using the TCP_NODELAY and TCP_CORK socket options. When run without any arguments, the client uses the default socket options. To initiate data transfer, define the server TCP port and the number of packets it must process. For example, 10,000 packets in this test. The client sends 15 packets, each of two bytes, and waits for a response from the server. It adopts the default TCP behavior in this case. Performance impact on a loopback interface To enable the socket option, build the client using gcc tcp_nodelay_client.c -o tcp_nodelay_client -lrt and then set the appropriate options. The following examples use a loopback interface to demonstrate three variations: To send buffer writes immediately, set the no_delay option on a socket configured with TCP_NODELAY . TCP sends the buffers right away, disabling the algorithm that combines the small packets. This improves performance but can cause a flurry of small packets to be sent for each logical packet. To collect multiple data packets and send them with one system call, configure the TCP_CORK socket option. Using the cork technique significantly reduces the time required to send data packets as it combines full logical packets in its buffers and sends fewer overall network packets. You must ensure that you remove the cork at the appropriate time. When developing programs, if you must send bulk data from a file, consider using TCP_CORK with the sendfile() function. To measure performance without using socket options. This is the baseline measure when TCP combines buffer writes and waits to check whether more data can optimally fit into the network packet. An end-to-end sketch that combines these variations follows the example output at the end of this chapter. Additional resources sendfile(2) man page on your system | [
"int one = 1; setsockopt(descriptor, SOL_TCP, TCP_NODELAY, &one, sizeof(one));",
"int one = 1; setsockopt(descriptor, SOL_TCP, TCP_CORK, &one, sizeof(one));",
"int zero = 0; setsockopt(descriptor, SOL_TCP, TCP_CORK, &zero, sizeof(zero));",
"gcc tcp_nodelay_client.c -o tcp_nodelay_client -lrt",
"./tcp_nodelay_server 5001 10000",
"./tcp_nodelay_client localhost 5001 10000 no_delay 10000 packets of 30 bytes sent in 1649.771240 ms: 181.843399 bytes/ms using TCP_NODELAY",
"./tcp_nodelay_client localhost 5001 10000 cork 10000 packets of 30 bytes sent in 850.796448 ms: 352.610779 bytes/ms using TCP_CORK",
"./tcp_nodelay_client localhost 5001 10000 10000 packets of 30 bytes sent in 400129.781250 ms: 0.749757 bytes/ms"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/assembly_socket-options-in-rhel-for-real-time_understanding-rhel-for-real-time-core-concepts |
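Pulling the example output above together, the following end-to-end sketch runs the loopback comparison; it assumes the tcp_nodelay_server.c and tcp_nodelay_client.c sources from the referenced Knowledgebase solution are present, and the port and packet counts are arbitrary:
# Build the example server and client
gcc tcp_nodelay_server.c -o tcp_nodelay_server -lrt
gcc tcp_nodelay_client.c -o tcp_nodelay_client -lrt
# Start the server on TCP port 5001, expecting 10000 packets
./tcp_nodelay_server 5001 10000 &
# Compare the three client variants over the loopback interface
./tcp_nodelay_client localhost 5001 10000 no_delay
./tcp_nodelay_client localhost 5001 10000 cork
./tcp_nodelay_client localhost 5001 10000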
Chapter 1. Ansible development tools | Chapter 1. Ansible development tools Ansible development tools ( ansible-dev-tools ) is a suite of tools provided with Ansible Automation Platform to help automation creators to create, test, and deploy playbook projects, execution environments, and collections. The Ansible VS Code extension by Red Hat integrates most of the Ansible development tools: you can use these tools from the VS Code user interface. Use Ansible development tools during local development of playbooks, local testing, and in a CI pipeline (linting and testing). This document describes how to use Ansible development tools to create a playbook project that contains playbooks and roles that you can reuse within the project. It also describes how to test the playbooks and deploy the project on your Ansible Automation Platform instance so that you can use the playbooks in automation jobs. 1.1. Ansible development tools components You can operate some Ansible development tools from the VS Code UI when you have installed the Ansible extension, and the remainder from the command line. VS Code is a free open-source code editor available on Linux, Mac, and Windows. Ansible VS Code extension This is not packaged with the Ansible Automation Platform RPM package, but it is an integral part of the automation creation workflow. From the VS Code UI, you can use the Ansible development tools for the following tasks: Scaffold directories for a playbook project or a collection. Write playbooks with the help of syntax highlighting and auto-completion. Debug your playbooks with a linter. Execute playbooks with Ansible Core using ansible-playbook . Execute playbooks in an execution environment with ansible-navigator . From the VS Code extension, you can also connect to Red Hat Ansible Lightspeed with IBM watsonx Code Assistant. Command-line Ansible development tools You can perform the following tasks with Ansible development tools from the command line, including the terminal in VS Code: Create an execution environment. Test your playbooks, roles, modules, plugins and collections. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/devtools-intro |
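As a hedged sketch of the command-line tasks listed above, the following commands are illustrative; the playbook name site.yml and the image tag are placeholders, and exact flags can differ between tool versions:
# Debug a playbook with the linter, locally or in a CI pipeline
ansible-lint site.yml
# Execute a playbook inside an execution environment
ansible-navigator run site.yml --mode stdout
# Create a custom execution environment image from a definition file
ansible-builder build --tag my_ee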
Appendix A. List of tickets by component | Appendix A. List of tickets by component Bugzilla and JIRA IDs are listed in this document for reference. Bugzilla bugs that are publicly accessible include a link to the ticket. Component Tickets 389-ds-base BZ#1816862 , BZ#1638875 , BZ#1728943 NetworkManager BZ#1814746 , BZ#1626348 anaconda BZ#1665428, BZ#1775975 , BZ#1630299, BZ#1823578, BZ#1672405 , BZ#1644662, BZ#1745064, BZ#1821192, BZ#1822880 , BZ#1862116 , BZ#1890261, BZ#1891827, BZ#1691319 , BZ#1931069 apr BZ#1819607 authselect BZ#1654018 bcc BZ#1837906 bind BZ#1818785 buildah-container BZ#1627898 buildah BZ#1806044 clevis BZ#1716040 , BZ#1818780 , BZ#1436735, BZ#1819767 cloud-init BZ#1750862 cloud-utils-growpart BZ#1846246 cockpit-session-recording BZ#1826516 cockpit BZ#1710731 , BZ#1666722 corosync-qdevice BZ#1784200 crun BZ#1841438 crypto-policies BZ#1832743 , BZ#1660839 cyrus-sasl BZ#1817054 distribution BZ#1815402, BZ#1657927 dnf BZ#1793298 , BZ#1832869 , BZ#1842285 elfutils BZ#1804321 fapolicyd BZ#1897090 , BZ#1817413 , BZ#1714529 fence-agents BZ#1830776 , BZ#1775847 firewalld BZ#1790948 , BZ#1682913 , BZ#1809225 , BZ#1817205 , BZ#1809636 freeradius BZ#1672285 , BZ#1859527 , BZ#1723362 gcc-toolset-10-gdb BZ#1838777 gcc BZ#1784758 gdb BZ#1659535 git BZ#1825114 glibc BZ#1812756 , BZ#1743445, BZ#1783303, BZ#1642150, BZ#1810146 , BZ#1748197 , BZ#1774115 , BZ#1807824 , BZ#1757354 , BZ#1836867 , BZ#1780204, BZ#1821531, BZ#1784525 gnome-session BZ#1739556 gnome-shell-extensions BZ#1717947 gnome-shell BZ#1724302 gnome-software BZ#1668760 gnutls BZ#1677754 , BZ#1789392 , BZ#1849079 , BZ#1855803 go-toolset BZ#1820596 gpgme BZ#1829822 grafana-container BZ#1823834 grafana-pcp BZ#1807099 grafana BZ#1807323 grub2 BZ#1583445 httpd BZ#1209162 initial-setup BZ#1676439 ipa-healthcheck BZ#1852244 ipa BZ#1816784 , BZ#1810154 , BZ#913799, BZ#1651577 , BZ#1851139, BZ#1664719 , BZ#1664718 iperf3 BZ#1665142, BZ#1700497 jss BZ#1821851 kernel-rt BZ#1818138 kernel BZ#1758323, BZ#1812666, BZ#1793389, BZ#1694705, BZ#1748451, BZ#1654962, BZ#1792125, BZ#1708456, BZ#1812577, BZ#1757933, BZ#1847837, BZ#1791664, BZ#1666538, BZ#1602962, BZ#1609288, BZ#1730502, BZ#1806882, BZ#1660290, BZ#1846838, BZ#1865745, BZ#1868526, BZ#1884857, BZ#1854037, BZ#1876527, BZ#1876519, BZ#1823764 , BZ#1822085, BZ#1735611, BZ#1281843, BZ#1828642, BZ#1825414, BZ#1761928, BZ#1791041, BZ#1796565, BZ#1834769, BZ#1785660, BZ#1683394, BZ#1817752, BZ#1782831, BZ#1821646, BZ#1519039, BZ#1627455, BZ#1501618, BZ#1495358, BZ#1633143, BZ#1503672, BZ#1570255, BZ#1696451, BZ#1348508, BZ#1778762 , BZ#1839311, BZ#1783396, BZ#1665295, BZ#1658840, BZ#1660627, BZ#1569610 krb5 BZ#1791062, BZ#1784655 , BZ#1820311 , BZ#1802334, BZ#1877991 libbpf BZ#1759154 libcap BZ#1487388 libdb BZ#1670768 libffi BZ#1723951 libgnome-keyring BZ#1607766 libkcapi BZ#1683123 libmaxminddb BZ#1642001 libpcap BZ#1806422 libreswan BZ#1544463 , BZ#1820206 libseccomp BZ#1770693 libselinux-python-2.8-module BZ#1666328 libssh BZ#1804797 libvirt BZ#1664592, BZ#1528684 lldb BZ#1841073 llvm-toolset BZ#1820587 llvm BZ#1820319 lshw BZ#1794049 lvm2 BZ#1496229, BZ#1768536 , BZ#1598199, BZ#1541165, JIRA:RHELPLAN-39320 mariadb BZ#1942330 memcached BZ#1809536 mesa BZ#1886147 microdnf BZ#1781126 mod_http2 BZ#1814236 nfs-utils BZ#1817756 , BZ#1592011 nginx BZ#1668717 , BZ#1826632 nmstate BZ#1674456 nss_nis BZ#1803161 nss BZ#1817533 , BZ#1645153 opencryptoki BZ#1780293 openmpi BZ#1866402 opensc BZ#1810660 openscap BZ#1803116 , BZ#1870087 , BZ#1795563 , BZ#1824152 , BZ#1829761 openssh BZ#1744108 
openssl BZ#1685470, BZ#1810911 oscap-anaconda-addon BZ#1816199, BZ#1665082, BZ#1674001 , BZ#1691305, BZ#1787156 , BZ#1843932 , BZ#1834716 pacemaker BZ#1828488 , BZ#1784601 , BZ#1837747, BZ#1718324 papi BZ#1807346, BZ#1664056 , BZ#1726070 pcp-container BZ#1497296 pcp BZ#1792971 pcs BZ#1817547, BZ#1684676 , BZ#1839637 , BZ#1619620 perl-5.30-module BZ#1713592 perl-IO-Socket-SSL BZ#1824222 perl-libwww-perl BZ#1781177 php BZ#1797661 pki-core BZ#1729215 , BZ#1868233 , BZ#1770322, BZ#1824948 podman BZ#1804193 , BZ#1881894 , BZ#1627899 powertop BZ#1783110 pykickstart BZ#1637872 python38 BZ#1847416 qemu-kvm BZ#1719687 , BZ#1860743 , JIRA:RHELPLAN-45901, BZ#1651994 rear BZ#1843809 , BZ#1729502 , BZ#1743303 redhat-support-tool BZ#1802026 resource-agents BZ#1814896 rhel-system-roles-sap BZ#1844190 , BZ#1660832 rhel-system-roles BZ#1889468 , BZ#1822158 , BZ#1677739 rpm BZ#1688849 rsyslog BZ#1659383, JIRA:RHELPLAN-10431, BZ#1679512 , BZ#1713427 ruby-2.7-module BZ#1817135 ruby BZ#1846113 rust-toolset BZ#1820593 samba BZ#1817557 , JIRA:RHELPLAN-13195 scap-security-guide BZ#1843913 , BZ#1858866 , BZ#1750755 , BZ#1760734 , BZ#1832760 , BZ#1815007 scap-workbench BZ#1640715 selinux-policy BZ#1826788 , BZ#1746398, BZ#1776873 , BZ#1772852 , BZ#1641631, BZ#1860443 setools BZ#1820079 skopeo-container BZ#1627900 smartmontools BZ#1671154 spice BZ#1849563 squid BZ#1829467 sssd BZ#1827615 , BZ#1793727 stratis-cli BZ#1734496 stunnel BZ#1808365 subscription-manager BZ#1674337 sudo BZ#1786990 systemtap BZ#1804319 tang BZ#1716039 tcpdump BZ#1804063 tigervnc BZ#1806992 tpm2-tools BZ#1789682 tuned BZ#1792264 , BZ#1840689 , BZ#1746957 udica BZ#1763210 usbguard BZ#1738590 , BZ#1667395 , BZ#1683567 valgrind BZ#1804324 wayland BZ#1673073 xdp-tools BZ#1880268 , BZ#1820670 xorg-x11-drv-qxl BZ#1642887 xorg-x11-server BZ#1698565 yum BZ#1788154 other JIRA:RHELPLAN-45950, JIRA:RHELPLAN-57572, BZ#1640697, BZ#1659609, BZ#1687900 , BZ#1697896, BZ#1790635, BZ#1823398, BZ#1757877, JIRA:RHELPLAN-25571, BZ#1777138, JIRA:RHELPLAN-27987, JIRA:RHELPLAN-28940, JIRA:RHELPLAN-34199, JIRA:RHELPLAN-57914, BZ#1897383 , BZ#1900019 , BZ#1839151 , BZ#1780124 , JIRA:RHELPLAN-42395, BZ#1889736 , BZ#1842656, JIRA:RHELPLAN-45959, JIRA:RHELPLAN-45958, JIRA:RHELPLAN-45957, JIRA:RHELPLAN-45956, JIRA:RHELPLAN-45952, JIRA:RHELPLAN-45945, JIRA:RHELPLAN-45939, JIRA:RHELPLAN-45937, JIRA:RHELPLAN-45936, JIRA:RHELPLAN-45930, JIRA:RHELPLAN-45926, JIRA:RHELPLAN-45922, JIRA:RHELPLAN-45920, JIRA:RHELPLAN-45918, JIRA:RHELPLAN-45916, JIRA:RHELPLAN-45915, JIRA:RHELPLAN-45911, JIRA:RHELPLAN-45910, JIRA:RHELPLAN-45909, JIRA:RHELPLAN-45908, JIRA:RHELPLAN-45906, JIRA:RHELPLAN-45904, JIRA:RHELPLAN-45900, JIRA:RHELPLAN-45899, JIRA:RHELPLAN-45884, JIRA:RHELPLAN-37573, JIRA:RHELPLAN-37570, JIRA:RHELPLAN-49954, JIRA:RHELPLAN-50002, JIRA:RHELPLAN-43531, JIRA:RHELPLAN-48838, BZ#1873567 , BZ#1866695 , JIRA:RHELPLAN-14068, JIRA:RHELPLAN-7788, JIRA:RHELPLAN-40469, JIRA:RHELPLAN-42617, JIRA:RHELPLAN-30878, JIRA:RHELPLAN-37517, JIRA:RHELPLAN-55009, JIRA:RHELPLAN-42396, BZ#1836211, JIRA:RHELPLAN-57564, JIRA:RHELPLAN-57567, BZ#1890499 , JIRA:RHELPLAN-40234, JIRA:RHELPLAN-56676, JIRA:RHELPLAN-14754, JIRA:RHELPLAN-51289, BZ#1893174 , BZ#1690207, JIRA:RHELPLAN-1212, BZ#1559616, BZ#1889737 , BZ#1812552 , JIRA:RHELPLAN-14047, BZ#1769727 , JIRA:RHELPLAN-27394, JIRA:RHELPLAN-27737, JIRA:RHELPLAN-41549, BZ#1642765, JIRA:RHELPLAN-10304, BZ#1646541, BZ#1647725, BZ#1686057 , BZ#1748980 , BZ#1827628, BZ#1871025 , BZ#1871953 , BZ#1874892, BZ#1893767 , JIRA:RHELPLAN-60226 | null | 
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.3_release_notes/list_of_tickets_by_component |
Chapter 80. trust | Chapter 80. trust This chapter describes the commands under the trust command. 80.1. trust create Create new trust Usage: Table 80.1. Positional arguments Value Summary <trustor-user> User that is delegating authorization (name or id) <trustee-user> User that is assuming authorization (name or id) Table 80.2. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Project being delegated (name or id) (required) --role <role> Roles to authorize (name or id) (repeat option to set multiple values, required) --impersonate Tokens generated from the trust will represent <trustor> (defaults to False) --expiration <expiration> Sets an expiration date for the trust (format of yyyy-mm-ddTHH:MM:SS) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --trustor-domain <trustor-domain> Domain that contains <trustor> (name or id) --trustee-domain <trustee-domain> Domain that contains <trustee> (name or id) Table 80.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 80.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.2. trust delete Delete trust(s) Usage: Table 80.7. Positional arguments Value Summary <trust> Trust(s) to delete Table 80.8. Command arguments Value Summary -h, --help Show this help message and exit 80.3. trust list List trusts Usage: Table 80.9. Command arguments Value Summary -h, --help Show this help message and exit Table 80.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 80.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 80.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 80.4. trust show Display trust details Usage: Table 80.14. Positional arguments Value Summary <trust> Trust to display Table 80.15. Command arguments Value Summary -h, --help Show this help message and exit Table 80.16.
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 80.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 80.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 80.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack trust create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --project <project> --role <role> [--impersonate] [--expiration <expiration>] [--project-domain <project-domain>] [--trustor-domain <trustor-domain>] [--trustee-domain <trustee-domain>] <trustor-user> <trustee-user>",
"openstack trust delete [-h] <trust> [<trust> ...]",
"openstack trust list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack trust show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <trust>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/trust |
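A hedged usage sketch of the trust commands documented above; the project, role, and user names are placeholders, and the trust ID is taken from the create output:
# Delegate the member role on a project from user alice to user bob
openstack trust create --project demo --role member --impersonate alice bob
# List, inspect, and remove trusts
openstack trust list
openstack trust show <trust-id>
openstack trust delete <trust-id>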
Chapter 3. Integrating with PagerDuty | Chapter 3. Integrating with PagerDuty If you are using PagerDuty , you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to PagerDuty. The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with PagerDuty: Add a new API service in PagerDuty and get the integration key. Use the integration key to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. Identify the policies you want to send notifications for, and update the notification settings for those policies. 3.1. Configuring PagerDuty Start integrating with PagerDuty by creating a new service and by getting the integration key. Procedure Go to Configuration Services . Select Add Services . Under General Settings , specify a Name and Description . Under Integration Setting , click Use our API Directly with Events v2 API selected for the Integration Type drop-down menu. Under Incident Settings , select an Escalation Policy , and configure notification settings and incident timeouts. Accept default settings for Incident Behavior and Alert Grouping , or configure them as required. Click Add Service . From the Service Details page, make note of the Integration Key . 3.2. Configuring Red Hat Advanced Cluster Security for Kubernetes Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the integration key. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select PagerDuty . Click New Integration ( add icon). Enter a name for Integration Name . Enter the integration key in the PagerDuty integration key field. Click Test to validate that the integration with PagerDuty is working. Click Create to create the configuration. 3.3. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the PagerDuty notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-pagerduty |
Chapter 3. Themes | Chapter 3. Themes Red Hat Single Sign-On provides theme support for web pages and emails. This allows customizing the look and feel of end-user facing pages so they can be integrated with your applications. Figure 3.1. Login page with sunrise example theme 3.1. Theme types A theme can provide one or more types to customize different aspects of Red Hat Single Sign-On. The types available are: Account - Account management Admin - Admin Console Email - Emails Login - Login forms Welcome - Welcome page 3.2. Configuring a theme All theme types, except welcome, are configured through the Admin Console. Procedure Log into the Admin Console. Select your realm from the drop-down box in the top left corner. Click Realm Settings from the menu. Click the Themes tab. Note To set the theme for the master Admin Console you need to set the Admin Console theme for the master realm. To see the changes to the Admin Console refresh the page. Change the welcome theme by editing standalone.xml , standalone-ha.xml , or domain.xml . Add welcomeTheme to the theme element, for example: <theme> ... <welcomeTheme>custom-theme</welcomeTheme> ... </theme> Restart the server for the changes to the welcome theme to take effect. 3.3. Default themes Red Hat Single Sign-On comes bundled with default themes in the server's root themes directory. To simplify upgrading you should not edit the bundled themes directly. Instead create your own theme that extends one of the bundled themes. 3.4. Creating a theme A theme consists of: HTML templates ( Freemarker Templates ) Images Message bundles Stylesheets Scripts Theme properties Unless you plan to replace every single page you should extend another theme. Most likely you will want to extend the Red Hat Single Sign-On theme, but you could also consider extending the base theme if you are significantly changing the look and feel of the pages. The base theme primarily consists of HTML templates and message bundles, while the Red Hat Single Sign-On theme primarily contains images and stylesheets. When extending a theme you can override individual resources (templates, stylesheets, etc.). If you decide to override HTML templates bear in mind that you may need to update your custom template when upgrading to a new release. While creating a theme it's a good idea to disable caching as this makes it possible to edit theme resources directly from the themes directory without restarting Red Hat Single Sign-On. Procedure Edit standalone.xml . For theme set staticMaxAge to -1 and both cacheTemplates and cacheThemes to false : <theme> <staticMaxAge>-1</staticMaxAge> <cacheThemes>false</cacheThemes> <cacheTemplates>false</cacheTemplates> ... </theme> Create a directory in the themes directory. The name of the directory becomes the name of the theme. For example to create a theme called mytheme create the directory themes/mytheme . Inside the theme directory, create a directory for each of the types your theme is going to provide. For example, to add the login type to the mytheme theme, create the directory themes/mytheme/login . For each type create a file theme.properties which allows setting some configuration for the theme. For example, to configure the theme themes/mytheme/login to extend the base theme and import some common resources, create the file themes/mytheme/login/theme.properties with following contents: You have now created a theme with support for the login type. 
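As a compact sketch of the preceding steps, the following shell commands scaffold the mytheme login type; they reproduce the directory layout and theme.properties contents described above:
# Create the theme directory and the login type directory
mkdir -p themes/mytheme/login
# Write the theme.properties file that extends the base theme
cat > themes/mytheme/login/theme.properties <<'EOF'
parent=base
import=common/keycloak
EOF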
Log into the Admin Console to check out your new theme Select your realm Click Realm Settings from the menu. Click on the Themes tab. For Login Theme select mytheme and click Save . Open the login page for the realm. You can do this either by logging in through your application or by opening the Account Management console ( /realms/{realm name}/account ). To see the effect of changing the parent theme, set parent=keycloak in theme.properties and refresh the login page. Note Be sure to re-enable caching in production as it will significantly impact performance. 3.4.1. Theme properties Theme properties are set in the file <THEME TYPE>/theme.properties in the theme directory. parent - Parent theme to extend import - Import resources from another theme styles - Space-separated list of styles to include locales - Comma-separated list of supported locales There is a list of properties that can be used to change the css class used for certain element types. For a list of these properties look at the theme.properties file in the corresponding type of the keycloak theme ( themes/keycloak/<THEME TYPE>/theme.properties ). You can also add your own custom properties and use them from custom templates. When doing so, you can substitute system properties or environment variables by using these formats: ${some.system.property} - for system properties ${env.ENV_VAR} - for environment variables. A default value can also be provided in case the system property or the environment variable is not found with ${foo:defaultValue} . Note If no default value is provided and there's no corresponding system property or environment variable, then nothing is replaced and you end up with the format in your template. Here's an example of what is possible: javaVersion=${java.version} unixHome=${env.HOME:Unix home not found} windowsHome=${env.HOMEPATH:Windows home not found} 3.4.2. Add a stylesheet to a theme You can add one or more stylesheets to a theme. Procedure Create a file in the <THEME TYPE>/resources/css directory of your theme. Add this file to the styles property in theme.properties . For example, to add styles.css to the mytheme , create themes/mytheme/login/resources/css/styles.css with the following content: .login-pf body { background: DimGrey none; } Edit themes/mytheme/login/theme.properties and add: To see the changes, open the login page for your realm. You will notice that the only styles being applied are those from your custom stylesheet. To include the styles from the parent theme, load the styles from that theme. Edit themes/mytheme/login/theme.properties and change styles to: Note To override styles from the parent stylesheets, ensure that your stylesheet is listed last. 3.4.3. Adding a script to a theme You can add one or more scripts to a theme. Procedure Create a file in the <THEME TYPE>/resources/js directory of your theme. Add the file to the scripts property in theme.properties . For example, to add script.js to the mytheme , create themes/mytheme/login/resources/js/script.js with the following content: alert('Hello'); Then edit themes/mytheme/login/theme.properties and add: 3.4.4. Adding an image to a theme To make images available to the theme add them to the <THEME TYPE>/resources/img directory of your theme. These can be used from within stylesheets or directly in HTML templates. For example to add an image to the mytheme copy an image to themes/mytheme/login/resources/img/image.jpg .
You can then use this image from within a custom stylesheet with: body { background-image: url('../img/image.jpg'); background-size: cover; } Or to use directly in HTML templates add the following to a custom HTML template: <img src="${url.resourcesPath}/img/image.jpg"> 3.4.5. Messages Text in the templates is loaded from message bundles. A theme that extends another theme will inherit all messages from the parent's message bundle and you can override individual messages by adding <THEME TYPE>/messages/messages_en.properties to your theme. For example to replace Username on the login form with Your Username for the mytheme create the file themes/mytheme/login/messages/messages_en.properties with the following content: Within a message values like {0} and {1} are replaced with arguments when the message is used. For example {0} in Log in to {0} is replaced with the name of the realm. Texts of these message bundles can be overwritten by realm-specific values. The realm-specific values are manageable via UI and API. 3.4.6. Adding a language to a realm Prerequisites To enable internationalization for a realm, see the Server Administration Guide . Procedure Create the file <THEME TYPE>/messages/messages_<LOCALE>.properties in the directory of your theme. Add this file to the locales property in <THEME TYPE>/theme.properties . For a language to be available to users, the realm's login , account , and email theme types have to support the language, so you need to add your language for those theme types. For example, to add Norwegian translations to the mytheme theme create the file themes/mytheme/login/messages/messages_no.properties with the following content: If you omit a translation for messages, they will use English. Edit themes/mytheme/login/theme.properties and add: Add the same for the account and email theme types. To do this create themes/mytheme/account/messages/messages_no.properties and themes/mytheme/email/messages/messages_no.properties . Leaving these files empty will result in the English messages being used. Copy themes/mytheme/login/theme.properties to themes/mytheme/account/theme.properties and themes/mytheme/email/theme.properties . Add a translation for the language selector. This is done by adding a message to the English translation. To do this add the following to themes/mytheme/account/messages/messages_en.properties and themes/mytheme/login/messages/messages_en.properties : By default message properties files should be encoded using ISO-8859-1. It's also possible to specify the encoding using a special header. For example to use UTF-8 encoding: Additional resources See Locale Selector for details on how the current locale is selected. 3.4.7. Adding custom Identity Providers icons Red Hat Single Sign-On supports adding icons for custom Identity providers, which are displayed on the login screen. Procedure Define icon classes in your login theme.properties file (for example, themes/mytheme/login/theme.properties ) with key pattern kcLogoIdP-<alias> . For an Identity Provider with an alias myProvider , you may add a line to the theme.properties file of your custom theme. For example: All icons are available on the official website of PatternFly4. Icons for social providers are already defined in the base login theme properties ( themes/keycloak/login/theme.properties ), which you can use for inspiration. 3.4.8. Creating a custom HTML template Red Hat Single Sign-On uses Apache Freemarker templates to generate HTML.
You can override individual templates in your own theme by creating <THEME TYPE>/<TEMPLATE>.ftl . For a list of templates used see themes/base/<THEME TYPE> . Procedure Copy the template from the base theme to your own theme. Apply the modifications you need. For example, to create a custom login form for the mytheme theme, copy themes/base/login/login.ftl to themes/mytheme/login and open it in an editor. After the first line (<#import ... >), add <h1>HELLO WORLD!</h1> as shown here: <#import "template.ftl" as layout> <h1>HELLO WORLD!</h1> ... Back up the modified template. When upgrading to a new version of Red Hat Single Sign-On you may need to update your custom templates to apply changes to the original template if applicable. Additional resources See the FreeMarker Manual for details on how to edit templates. 3.4.9. Emails To edit the subject and contents for emails, for example the password recovery email, add a message bundle to the email type of your theme. There are three messages for each email. One for the subject, one for the plain text body and one for the HTML body. To see all emails available take a look at themes/base/email/messages/messages_en.properties . For example to change the password recovery email for the mytheme theme create themes/mytheme/email/messages/messages_en.properties with the following content: 3.5. Deploying themes Themes can be deployed to Red Hat Single Sign-On by copying the theme directory to themes or it can be deployed as an archive. During development you can copy the theme to the themes directory, but in production you may want to consider using an archive . An archive makes it simpler to have a versioned copy of the theme, especially when you have multiple instances of Red Hat Single Sign-On for example with clustering. Procedure To deploy a theme as an archive, create a JAR archive with the theme resources. Add a file META-INF/keycloak-themes.json to the archive that lists the available themes in the archive as well as what types each theme provides. For example for the mytheme theme create mytheme.jar with the contents: META-INF/keycloak-themes.json theme/mytheme/login/theme.properties theme/mytheme/login/login.ftl theme/mytheme/login/resources/css/styles.css theme/mytheme/login/resources/img/image.png theme/mytheme/login/messages/messages_en.properties theme/mytheme/email/messages/messages_en.properties The contents of META-INF/keycloak-themes.json in this case would be: { "themes": [{ "name" : "mytheme", "types": [ "login", "email" ] }] } A single archive can contain multiple themes and each theme can support one or more types. To deploy the archive to Red Hat Single Sign-On, add it to the standalone/deployments/ directory of Red Hat Single Sign-On and it will be automatically loaded. 3.6. Theme selector By default the theme configured for the realm is used, with the exception of clients being able to override the login theme. This behavior can be changed through the Theme Selector SPI. This could be used to select different themes for desktop and mobile devices by looking at the user agent header, for example. To create a custom theme selector you need to implement ThemeSelectorProviderFactory and ThemeSelectorProvider . 3.7. Theme resources When implementing custom providers in Red Hat Single Sign-On there may often be a need to add additional templates, resources and message bundles.
The easiest way to load additional theme resources is to create a JAR with templates in theme-resources/templates , resources in theme-resources/resources , and message bundles in theme-resources/messages . If you want a more flexible way to load templates and resources, that can be achieved through the ThemeResource SPI. By implementing ThemeResourceProviderFactory and ThemeResourceProvider you can decide exactly how to load templates and resources. 3.8. Locale selector By default, the locale is selected using the DefaultLocaleSelectorProvider which implements the LocaleSelectorProvider interface. English is the default language when internationalization is disabled. With internationalization enabled, the locale is resolved according to the logic described in the Server Administration Guide . This behavior can be changed through the LocaleSelector SPI by implementing the LocaleSelectorProvider and LocaleSelectorProviderFactory . The LocaleSelectorProvider interface has a single method, resolveLocale , which must return a locale given a RealmModel and a nullable UserModel . The actual request is available from the KeycloakSession#getContext method. Custom implementations can extend the DefaultLocaleSelectorProvider in order to reuse parts of the default behavior. For example to ignore the Accept-Language request header, a custom implementation could extend the default provider, override its getAcceptLanguageHeaderLocale , and return a null value. As a result the locale selection will fall back on the realm's default language. 3.9. Additional resources For more details on creating and deploying a custom provider, see Service Provider Interfaces . | [
"<theme> <welcomeTheme>custom-theme</welcomeTheme> </theme>",
"<theme> <staticMaxAge>-1</staticMaxAge> <cacheThemes>false</cacheThemes> <cacheTemplates>false</cacheTemplates> </theme>",
"parent=base import=common/keycloak",
"javaVersion=USD{java.version} unixHome=USD{env.HOME:Unix home not found} windowsHome=USD{env.HOMEPATH:Windows home not found}",
".login-pf body { background: DimGrey none; }",
"styles=css/styles.css",
"styles=web_modules/@fontawesome/fontawesome-free/css/icons/all.css web_modules/@patternfly/react-core/dist/styles/base.css web_modules/@patternfly/react-core/dist/styles/app.css node_modules/patternfly/dist/css/patternfly.min.css node_modules/patternfly/dist/css/patternfly-additions.min.css css/login.css css/styles.css",
"alert('Hello');",
"scripts=js/script.js",
"body { background-image: url('../img/image.jpg'); background-size: cover; }",
"<img src=\"USD{url.resourcesPath}/img/image.jpg\">",
"usernameOrEmail=Your Username",
"usernameOrEmail=Brukernavn password=Passord",
"locales=en,no",
"locale_no=Norsk",
"encoding: UTF-8 usernameOrEmail=.",
"kcLogoIdP-myProvider = fa fa-lock",
"<#import \"template.ftl\" as layout> <h1>HELLO WORLD!</h1>",
"passwordResetSubject=My password recovery passwordResetBody=Reset password link: {0} passwordResetBodyHtml=<a href=\"{0}\">Reset password</a>",
"{ \"themes\": [{ \"name\" : \"mytheme\", \"types\": [ \"login\", \"email\" ] }] }"
]
| https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_developer_guide/themes |
Chapter 4. Configuring Satellite Server with External Services | Chapter 4. Configuring Satellite Server with External Services If you do not want to configure the DNS, DHCP, and TFTP services on Satellite Server, use this section to configure your Satellite Server to work with external DNS, DHCP and TFTP services. 4.1. Configuring Satellite Server with External DNS You can configure Satellite Server with external DNS. Satellite Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Satellite Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 4.2. Configuring Satellite Server with External DHCP To configure Satellite Server with external DHCP, you must complete the following procedures: Section 4.2.1, "Configuring an External DHCP Server to Use with Satellite Server" Section 4.2.2, "Configuring Satellite Server with an External DHCP Server" 4.2.1. Configuring an External DHCP Server to Use with Satellite Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Satellite Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) or its utility packages. You must also share the DHCP configuration and lease files with Satellite Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, clients fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and BIND packages or its utility packages depending on your host version. For Red Hat Enterprise Linux 7 host: For Red Hat Enterprise Linux 8 host: Generate a security token: As a result, a key pair that consists of two files is created in the current directory. Copy the secret hash from the key: Edit the dhcpd configuration file for all subnets and add the key. The following is an example: Note that the option routers value is the Satellite or Capsule IP address that you want to use with an external DHCP service. Delete the two key files from the directory that they were created in. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
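Note: The dnssec-keygen invocation shown in the key-generation step above only works on older BIND releases; BIND 9.13 and later removed the HMAC algorithms from dnssec-keygen and ship tsig-keygen for this purpose. The following sketch is an assumption-based alternative for such hosts and is not part of the original procedure:

```bash
# Sketch: generate the OMAPI key with tsig-keygen on newer BIND releases.
# Unlike dnssec-keygen, tsig-keygen prints a ready-made key clause instead of
# writing Komapi_key.+*.private/.key files, so copy the secret value from the
# output into the "key omapi_key { ... }" stanza of the dhcpd.conf example.
tsig-keygen -a hmac-md5 omapi_key
# Expected output shape (the secret value here is illustrative):
# key "omapi_key" {
#         algorithm hmac-md5;
#         secret "jNSE5YI3H1A8Oj/tkV4...";
# };
```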
Configure the firewall for external access to the DHCP server: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a previous step: To ensure that the configuration files are accessible, restore the read and execute flags: Start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. 4.2.2. Configuring Satellite Server with an External DHCP Server You can configure Satellite Server with an external DHCP server. Prerequisite Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Satellite Server. For more information, see Section 4.2.1, "Configuring an External DHCP Server to Use with Satellite Server" . Procedure Install the nfs-utils utility: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems in /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DHCP service with the appropriate subnets and domain. 4.3. Configuring Satellite Server with External TFTP You can configure Satellite Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 4.4. Configuring Satellite Server with External IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage using the IdM server.
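Note (a supplement to the external DHCP configuration in Section 4.2, not a step from the original procedure): after the dhcp.yml changes are applied, the OMAPI channel can be exercised directly from Satellite Server with omshell, which ships with the ISC DHCP packages (the exact package name varies by release). The FQDN and key value below are placeholders carried over from the earlier examples:

```bash
# Sketch: confirm that Satellite Server can reach dhcpd over OMAPI on port 7911.
omshell <<'EOF'
server DHCP_Server_FQDN
port 7911
key omapi_key jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw==
connect
EOF
# A successful session returns to the prompt after connect; an error or a hang
# usually points at the firewall rule for port 7911 or at a mismatched OMAPI key.
```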
Satellite Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. For more information about Red Hat Identity Management, see the Linux Domain Identity, Authentication, and Policy Guide . To configure Satellite Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 4.4.1, "Configuring Dynamic DNS Update with GSS-TSIG Authentication" Section 4.4.2, "Configuring Dynamic DNS Update with TSIG Authentication" To revert to internal DNS service, use the following procedure: Section 4.4.3, "Reverting to Internal DNS Service" Note You are not required to use Satellite Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Satellite Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see External Authentication for Provisioned Hosts in the Administering Red Hat Satellite guide. 4.4.1. Configuring Dynamic DNS Update with GSS-TSIG Authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Satellite Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos Principal on the IdM Server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Satellite Server to use to authenticate on the IdM server. Installing and Configuring the IdM Client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the -r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS Zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync .
Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that Manages the DNS Service for the Domain Use the satellite-installer command to configure the Satellite or Capsule that manages the DNS Service for the domain: On Satellite, enter the following command: On Capsule, enter the following command: After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the Configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Satellite Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 4.4.2. Configuring Dynamic DNS Update with TSIG Authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling External Updates to the DNS Zone in the IdM Server On the IdM Server, add the following to the top of the /etc/named.conf file: Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. 
Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group; however, in this scenario Satellite does not manage users and groups, so you need to assign the foreman-proxy user to the named group manually. On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing External Updates to the DNS Zone in the IdM Server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 4.4.3. Reverting to Internal DNS Service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS Server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as a DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the Configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. | [
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"yum install dhcp bind",
"yum install dhcp-server bind-utils",
"dnssec-keygen -a HMAC-MD5 -b 512 -n HOST omapi_key",
"grep ^Key Komapi_key.+*.private | cut -d ' ' -f2",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm HMAC-MD5; secret \"jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw==\"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp && firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl start dhcpd",
"yum install nfs-utils systemctl enable rpcbind nfs-server systemctl start rpcbind nfs-server nfs-lock nfs-idmapd",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp firewall-cmd --runtime-to-permanent",
"firewall-cmd --zone public --add-service mountd && firewall-cmd --zone public --add-service rpc-bind && firewall-cmd --zone public --add-service nfs && firewall-cmd --runtime-to-permanent",
"yum install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash bash-4.2USD cat /mnt/nfs/etc/dhcp/dhcpd.conf bash-4.2USD cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases bash-4.2USD exit",
"satellite-installer --foreman-proxy-dhcp=true --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret=jNSE5YI3H1A8Oj/tkV4...A2ZOHb6zv315CkNAY7DMYYCj48Umw== --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911 --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-server= DHCP_Server_FQDN",
"systemctl restart foreman-proxy",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp=true --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule/047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --scenario satellite --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab",
"satellite-installer --scenario capsule --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --scenario satellite --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-keyfile=/etc/rndc.key --foreman-proxy-dns-ttl=86400",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_disconnected_network_environment/configuring-external-services |
Creating and Managing Instances | Creating and Managing Instances Red Hat OpenStack Platform 17.0 Creating and managing instances using the CLI OpenStack Documentation Team rhos-docs@redhat.com | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/index |
Chapter 5. ConfigMap [v1] | Chapter 5. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binaryData object (string) BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet. data object (string) Data contains the configuration data. Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process. immutable boolean Immutable, if set to true, ensures that data stored in the ConfigMap cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 5.2. API endpoints The following API endpoints are available: /api/v1/configmaps GET : list or watch objects of kind ConfigMap /api/v1/watch/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps DELETE : delete collection of ConfigMap GET : list or watch objects of kind ConfigMap POST : create a ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps/{name} DELETE : delete a ConfigMap GET : read the specified ConfigMap PATCH : partially update the specified ConfigMap PUT : replace the specified ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps/{name} GET : watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/configmaps Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ConfigMap Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/configmaps Table 5.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/configmaps Table 5.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConfigMap Table 5.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.8. Body parameters Parameter Type Description body DeleteOptions schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ConfigMap Table 5.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty HTTP method POST Description create a ConfigMap Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body ConfigMap schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 202 - Accepted ConfigMap schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/configmaps Table 5.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" , and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/configmaps/{name} Table 5.18. Global path parameters Parameter Type Description name string name of the ConfigMap namespace string object name and auth scope, such as for teams and projects Table 5.19.
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConfigMap Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per-object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.21. Body parameters Parameter Type Description body DeleteOptions schema Table 5.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConfigMap Table 5.23. HTTP responses HTTP code Response body 200 - OK ConfigMap schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConfigMap Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests. Table 5.25. Body parameters Parameter Type Description body Patch schema Table 5.26. HTTP responses HTTP code Response body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConfigMap Table 5.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.28. Body parameters Parameter Type Description body ConfigMap schema Table 5.29. HTTP responses HTTP code Response body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/configmaps/{name} Table 5.30. Global path parameters Parameter Type Description name string name of the ConfigMap namespace string object name and auth scope, such as for teams and projects Table 5.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize.
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation.
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" , and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/metadata_apis/configmap-v1 |
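The query parameters documented above can be exercised directly against the REST endpoint. The following is a minimal sketch added for illustration — the API server address, the token retrieval, and the demo-cm payload are assumptions, not values from this reference:
# Hypothetical cluster endpoint and credentials; replace with your own.
APISERVER=https://api.example.com:6443
TOKEN=$(oc whoami -t)
# Server-side dry run of a ConfigMap create with strict field validation,
# using the dryRun, fieldValidation, and fieldManager parameters from Table 5.12.
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  "$APISERVER/api/v1/namespaces/default/configmaps?dryRun=All&fieldValidation=Strict&fieldManager=demo-editor" \
  -d '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"demo-cm"},"data":{"key":"value"}}'
A 200 or 201 response indicates that the object would have been accepted; because all dry run stages are processed server side, nothing is persisted.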
4.4. Cache Entry Expiration Behavior | 4.4. Cache Entry Expiration Behavior Red Hat JBoss Data Grid does not guarantee that an entry is removed immediately upon timeout. Instead, a number of mechanisms are used in collaboration to ensure efficient removal. An expired entry is removed from the cache when either: An entry is passivated/overflowed to disk and is discovered to have expired. The expiration maintenance thread discovers that an entry it has found is expired. If a user requests an entry that is expired but not yet removed, a null value is sent to the user. This mechanism ensures that the user never receives an expired entry. The entry is eventually removed by the expiration thread. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/cache_entry_expiration_notifications |
Chapter 1. Overview | Chapter 1. Overview AMQ JMS Pool is a library that provides caching of JMS connections, sessions, and message producers. It enables reuse of connection resources beyond the standard lifecycle defined by the JMS API. AMQ JMS Pool operates as a standard JMS ConnectionFactory instance that wraps the ConnectionFactory of your chosen JMS provider and manages the lifetime of Connection objects from that provider based on the configuration of the JMS pool. It can be configured to share one or more connections among callers to the pool createConnection() methods. AMQ JMS Pool is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.10 Release Notes . AMQ JMS Pool is based on the Pooled JMS messaging library. 1.1. Key features JMS 1.1 and 2.0 compatible Automatic reconnect Configurable connection and session pool sizes 1.2. Supported standards and protocols AMQ JMS Pool supports version 2.0 of the Java Message Service API. 1.3. Supported configurations Refer to Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ JMS Pool supported configurations. 1.4. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir> | [
"cd <project-dir>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_pool_library/overview |
Chapter 4. RPM installation of JBoss EAP | Chapter 4. RPM installation of JBoss EAP You can install JBoss EAP using RPM packages on supported installations of Red Hat Enterprise Linux 6, Red Hat Enterprise Linux 7, and Red Hat Enterprise Linux 8. 4.1. Subscribing to a minor JBoss EAP repository To install JBoss EAP with RPM, you need a subscription to both the Red Hat Enterprise Linux Server base software repository and a minor JBoss EAP repository. For the JBoss EAP repository, you must subscribe to a minor JBoss EAP repository. A minor repository provides a specific minor release of JBoss EAP 7 and all applicable patches. This allows you to maintain the same minor version of JBoss EAP, while staying current with high severity and security patches. For example, updating from this repository includes patches and security updates for that minor JBoss EAP version, but does not include upgrades from JBoss EAP 7.4 to JBoss EAP 7.5. Prerequisites Your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information, see the Red Hat Subscription Management documentation . Procedure Enter the Red Hat Subscription Manager. Replace EAP_MINOR_VERSION with your intended JBoss EAP minor version. For example, 7.4 . For Red Hat Enterprise Linux 6 and 7, replace RHEL_VERSION with either 6 or 7 depending on your Red Hat Enterprise Linux version: For Red Hat Enterprise Linux 8, use the following command: 4.2. Installing the JBoss EAP RPM installation on RHEL Choose the Red Hat Package Manager (RPM) to install a minor version of JBoss EAP. You can also use this method to subscribe to the current JBoss EAP repository. A minor version of JBoss EAP provides a specific minor release and all applicable patches. When you subscribe to a minor version of JBoss EAP, you can remain up-to-date with high severity and security patches. Prerequisites Set up an account on the Red Hat Customer Portal . Review the JBoss EAP 7 supported configurations and ensure your system is supported. Download the JBoss EAP installation package. Register to the Red Hat Enterprise Linux server using Red Hat Subscription Manager. Install a supported Java Development Kit (JDK). Procedure Install JBoss EAP and JDK 8. Install JBoss EAP and JDK 11. JDK 11 is available for Red Hat Enterprise Linux 7 and later. Red Hat Enterprise Linux 7: Red Hat Enterprise Linux 8: Note The groupinstall command installs the specified version of JDK if that version of JDK is not installed on the system. If a different version of JDK already exists, the system will contain multiple installed JDKs after the command is executed. If there are multiple JDKs installed on your system after groupinstall is complete, check which JDK is used for JBoss EAP execution. By default, the system default JDK is used. You can modify the default in either of the following ways: Change the system-wide configuration using the alternatives command: The command displays a list of installed JDKs and instructions for setting a specific JDK as the default. Change the JDK used by JBoss EAP by using the JAVA_HOME property. Your installation is complete. The default EAP_HOME path for the RPM installation is /opt/rh/eap7/root/usr/share/wildfly . Important When using the RPM installer to install JBoss EAP, configuring multiple domain or host controllers on the same machine is not supported. Additional resources See Setting up the EAP_HOME variable, in the JBoss EAP Installation Guide .
See Subscribing to a Minor JBoss EAP repository, in the JBoss EAP Installation Guide . For more information about changing the JAVA_HOME property, see the RPM Service Configuration Properties section of the Configuration Guide. 4.3. Changing repositories Over the lifespan of a JBoss EAP installation, you may want to change the software subscription from one JBoss EAP repository to another. Changing repositories is supported, but only under the following conditions: Changing from the current repository to a minor repository is supported if changing to the latest minor repository. Changing from a minor repository to another minor repository is supported if changing to the next minor JBoss EAP version. For example, changing from JBoss EAP 7.0 to JBoss EAP 7.1 is supported, but changing from JBoss EAP 7.0 to JBoss EAP 7.2 is not supported. Important The JBoss EAP current repository is no longer available as of JBoss EAP 7.3. If you subscribed to the current repository for a previous release of JBoss EAP, you must change your subscription to a minor repository for this release of JBoss EAP. Prerequisites Install JBoss EAP using the RPM installation. Choose a repository. Ensure that the JBoss EAP installation has all applicable updates applied. Issue the following command on your terminal to apply the updates: Comply with the supported change conditions shown above. Procedure Using Red Hat Subscription Manager, unsubscribe from the existing repository and subscribe to the new repository. In the command below, replace EXISTING_REPOSITORY and NEW_REPOSITORY with the respective repository names. A scripted sketch of this repository change appears after the command listing at the end of this chapter. 4.4. Configuring JBoss EAP RPM installation as a service on RHEL You can configure the Red Hat Package Manager (RPM) installation to run as a service in Red Hat Enterprise Linux (RHEL). An RPM installation of JBoss EAP installs everything that is required to run JBoss EAP as a service. Run the appropriate command for your RHEL, as demonstrated in this procedure. Replace EAP_SERVICE_NAME with either eap7-standalone for a standalone JBoss EAP server, or eap7-domain for a managed domain. Important You cannot configure more than one JBoss EAP instance as a service on a single machine. Prerequisites Install JBoss EAP as an RPM installation. Ensure that you have administrator privileges on the server. Procedure For Red Hat Enterprise Linux 6: For Red Hat Enterprise Linux 7 and later: Additional resources To start or stop an RPM installation of JBoss EAP on demand, see the RPM instructions in the JBoss EAP Configuration Guide . See the RPM service configuration files appendix in the JBoss EAP Configuration Guide for further details and options. 4.5. Installing JBoss EAP RPM installation by using Jsvc You can use the Apache Java Service (Jsvc) component of the JBoss Core Services collection to run JBoss EAP as a detached process (daemon) on Red Hat Enterprise Linux (RHEL). You would usually use Jsvc to run JBoss EAP on Windows or Solaris. For best product performance, use the native method for running JBoss EAP as a service on your version of RHEL. Jsvc is a set of libraries and applications that provide Java applications the ability to run as a background service. Applications run using Jsvc can perform operations as a privileged user and then switch identity to a non-privileged user. Prerequisites Install JBoss EAP as an RPM installation. Ensure that you have administrator privileges on the server. Procedure Log in to the Red Hat Customer Portal . Click on Systems in the Subscriber Inventory .
Subscribe to the JBoss Core Services CDN repositories for your operating system version and architecture: For Red Hat Enterprise Linux 6: For Red Hat Enterprise Linux 7 and later: Run the following command as the root user to install Apache Jsvc: Additional resources To learn more about controlling JBoss Core Services, see Apache HTTP Server Installation Guide . For information about installing JBoss Core Services on RHEL, see Installing JBoss Core Services Apache HTTP Server on Red Hat Enterprise Linux in the Apache HTTP Server Installation Guide . For information about installing JBoss Core Services on Windows, see Installing JBoss Core Services Apache HTTP Server on Windows in the Apache HTTP Server Installation Guide . For information about installing JBoss Core Services on Solaris, see Installing Apache HTTP Server on Solaris in the Apache HTTP Server Installation Guide . 4.6. Jsvc commands to start or stop JBoss EAP as a standalone server Using Java Service (Jsvc), you can enter various commands for starting or stopping JBoss EAP. The following table shows a list of paths that are needed for the commands for an archive JBoss EAP installation. Table 4.1. File Locations of Paths File Reference in Instructions File Location JSVC_BIN /usr/bin/jbcs-jsvc/jsvc JSVC_JAR /usr/bin/jbcs-jsvc/commons-daemon.jar CONF_DIR /opt/rh/eap7/root/usr/share/wildfly/standalone/configuration LOG_DIR /opt/rh/eap7/root/usr/share/wildfly/standalone/log The following example demonstrates starting a JBoss EAP standalone server using Jsvc with the JSVC_BIN path: The following example demonstrates stopping a JBoss EAP standalone server using Jsvc with the JSVC_BIN path: 4.7. Jsvc commands to start or stop JBoss EAP as a managed domain Using Java Service (Jsvc), you can enter various commands for starting or stopping JBoss EAP. The following table shows a list of paths that are needed for the commands for an archive JBoss EAP installation. Table 4.2. File Locations of Paths File Reference in Instructions File Location JSVC_BIN /usr/bin/jbcs-jsvc/jsvc JSVC_JAR /usr/bin/jbcs-jsvc/commons-daemon.jar CONF_DIR /opt/rh/eap7/root/usr/share/wildfly/domain/configuration LOG_DIR /opt/rh/eap7/root/usr/share/wildfly/domain/log The following example demonstrates starting a JBoss EAP domain server using Jsvc with the JSVC_BIN path. Before you issue the following command, set the JAVA_HOME system environment variable. The following example demonstrates stopping a JBoss EAP domain server using Jsvc with the JSVC_BIN path. 4.8. Uninstalling a JBoss EAP RPM installation Warning Uninstalling a JBoss EAP installation that is using the RPM method is not recommended. Because of the nature of RPM package management, it cannot be guaranteed that all installed packages and dependencies are completely removed, or that the system is not left in an inconsistent state caused by missing package dependencies. | [
"subscription-manager repos --enable=jb-eap- EAP_MINOR_VERSION -for-rhel- RHEL_VERSION -server-rpms",
"subscription-manager repos --enable=jb-eap- EAP_MINOR_VERSION -for-rhel- RHEL_VERSION -ARCH-rpms",
"yum groupinstall jboss-eap7",
"yum groupinstall jboss-eap7-jdk11",
"dnf groupinstall jboss-eap7-jdk11",
"alternatives --config java",
"yum update",
"subscription-manager repos --disable= EXISTING_REPOSITORY --enable= NEW_REPOSITORY",
"chkconfig EAP_SERVICE_NAME on",
"systemctl enable EAP_SERVICE_NAME .service",
"jb-coreservices-1-for-rhel-6-server-rpms",
"jb-coreservices-1-for-rhel-7-server-rpms",
"yum groupinstall jbcs-jsvc",
"JSVC_BIN -outfile LOG_DIR /jsvc.out.log -errfile LOG_DIR /jsvc.err.log -pidfile LOG_DIR /jsvc.pid -user jboss -D[Standalone] -XX:+UseCompressedOops -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file= LOG_DIR /server.log -Dlogging.configuration=file: CONF_DIR /logging.properties -Djboss.modules.policy-permissions -cp EAP_HOME /jboss-modules.jar: JSVC_JAR -Djboss.home.dir= EAP_HOME -Djboss.server.base.dir= EAP_HOME /standalone @org.jboss.modules.Main -start-method main -mp EAP_HOME /modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone",
"JSVC_BIN -stop -outfile LOG_DIR /jsvc.out.log -errfile LOG_DIR /jsvc.err.log -pidfile LOG_DIR /jsvc.pid -user jboss -D[Standalone] -XX:+UseCompressedOops -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file= LOG_DIR /server.log -Dlogging.configuration=file: CONF_DIR /logging.properties -Djboss.modules.policy-permissions -cp EAP_HOME /jboss-modules.jar: JSVC_JAR -Djboss.home.dir= EAP_HOME -Djboss.server.base.dir= EAP_HOME /standalone @org.jboss.modules.Main -start-method main -mp EAP_HOME /modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone",
"JSVC_BIN -outfile LOG_DIR /jsvc.out.log -errfile LOG_DIR /jsvc.err.log -pidfile LOG_DIR /jsvc.pid -user jboss -nodetach -D\"[Process Controller]\" -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file= LOG_DIR /process-controller.log -Dlogging.configuration=file: CONF_DIR /logging.properties -Djboss.modules.policy-permissions -cp \" EAP_HOME /jboss-modules.jar: JSVC_JAR \" org.apache.commons.daemon.support.DaemonWrapper -start org.jboss.modules.Main -start-method main -mp EAP_HOME /modules org.jboss.as.process-controller -jboss-home EAP_HOME -jvm \"USD{JAVA_HOME}\"/bin/java -mp EAP_HOME /modules -- -Dorg.jboss.boot.log.file= LOG_DIR /host-controller.log -Dlogging.configuration=file: CONF_DIR /logging.properties -Djboss.modules.policy-permissions -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -- -default-jvm \"USD{JAVA_HOME}\"/bin/java \\",
"JSVC_BIN -stop -outfile LOG_DIR /jsvc.out.log -errfile LOG_DIR /jsvc.err.log -pidfile LOG_DIR /jsvc.pid -user jboss -nodetach -D\"[Process Controller]\" -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file= LOG_DIR /process-controller.log -Dlogging.configuration=file: CONF_DIR /logging.properties -Djboss.modules.policy-permissions -cp \" EAP_HOME /jboss-modules.jar: JSVC_JAR \" org.apache.commons.daemon.support.DaemonWrapper -start org.jboss.modules.Main -start-method main -mp EAP_HOME /modules org.jboss.as.process-controller -jboss-home EAP_HOME -jvm USDJAVA_HOME/bin/java -mp EAP_HOME /modules -- -Dorg.jboss.boot.log.file= LOG_DIR /host-controller.log -Dlogging.configuration=file: CONF_DIR /logging.properties -Djboss.modules.policy-permissions -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -- -default-jvm USDJAVA_HOME/bin/java"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/installation_guide/assembly-rpm-installation-of-jboss-eap_default |
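Section 4.3 above describes the repository change as discrete steps; the following scripted sketch condenses the same flow. The repository names are examples only and must be replaced with the minor repositories for your RHEL version and target JBoss EAP release:
# Example minor repositories for RHEL 7 (hypothetical versions; adjust as needed).
EXISTING_REPOSITORY=jb-eap-7.3-for-rhel-7-server-rpms
NEW_REPOSITORY=jb-eap-7.4-for-rhel-7-server-rpms
# Apply all outstanding updates first, as required by the prerequisites.
yum update -y
# Swap the subscriptions in a single subscription-manager invocation.
subscription-manager repos --disable="$EXISTING_REPOSITORY" --enable="$NEW_REPOSITORY"
# Pull in the packages from the new repository.
yum update -y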
Part VI. Set Up Locking for the Cache | Part VI. Set Up Locking for the Cache | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-Set_Up_Locking_for_the_Cache |
Red Hat Ansible Automation Platform Installation Guide | Red Hat Ansible Automation Platform Installation Guide Red Hat Ansible Automation Platform 2.3 Learn how to install Red Hat Ansible Automation Platform based on supported installation scenarios. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/index |
Chapter 2. Using Control Groups | Chapter 2. Using Control Groups The following sections provide an overview of tasks related to creation and management of control groups. This guide focuses on utilities provided by systemd that are preferred as a way of cgroup management and will be supported in the future. Previous versions of Red Hat Enterprise Linux used the libcgroup package for creating and managing cgroups. This package is still available to assure backward compatibility (see Warning ), but it will not be supported in future versions of Red Hat Enterprise Linux. 2.1. Creating Control Groups From the systemd perspective, a cgroup is bound to a system unit configurable with a unit file and manageable with systemd's command-line utilities. Depending on the type of application, your resource management settings can be transient or persistent . To create a transient cgroup for a service, start the service with the systemd-run command. This way, it is possible to set limits on resources consumed by the service during its runtime. Applications can create transient cgroups dynamically by using API calls to systemd . See the section called "Online Documentation" for API reference. A transient unit is removed automatically as soon as the service is stopped. To assign a persistent cgroup to a service, edit its unit configuration file. The configuration is preserved after the system reboot, so it can be used to manage services that are started automatically. Note that scope units cannot be created in this way. 2.1.1. Creating Transient Cgroups with systemd-run The systemd-run command is used to create and start a transient service or scope unit and run a custom command in the unit. Commands executed in service units are started asynchronously in the background, where they are invoked from the systemd process. Commands run in scope units are started directly from the systemd-run process and thus inherit the execution environment of the caller. Execution in this case is synchronous. To run a command in a specified cgroup, type as root : The name stands for the name you want the unit to be known under. If --unit is not specified, a unit name will be generated automatically. It is recommended to choose a descriptive name, since it will represent the unit in the systemctl output. The name has to be unique during runtime of the unit. Use the optional --scope parameter to create a transient scope unit instead of a service unit, which is created by default. With the --slice option, you can make your newly created service or scope unit a member of a specified slice. Replace slice_name with the name of an existing slice (as shown in the output of systemctl -t slice ), or create a new slice by passing a unique name. By default, services and scopes are created as members of the system.slice . Replace command with the command you wish to execute in the service unit. Place this command at the very end of the systemd-run syntax, so that the parameters of this command are not confused with parameters of systemd-run . Besides the above options, there are several other parameters available for systemd-run . For example, --description creates a description of the unit, --remain-after-exit allows you to collect runtime information after terminating the service's process. The --machine option executes the command in a confined container. See the systemd-run (1) manual page to learn more. A short sketch that combines systemd-run with runtime property changes appears after the command listing for this chapter. Example 2.1.
Starting a New Service with systemd-run Use the following command to run the top utility in a service unit in a new slice called test . Type as root : The following message is displayed to confirm that you started the service successfully: Now, the name toptest.service can be used to monitor or to modify the cgroup with systemctl commands. 2.1.2. Creating Persistent Cgroups To configure a unit to be started automatically on system boot, execute the systemctl enable command (see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide ). Running this command automatically creates a unit file in the /usr/lib/systemd/system/ directory. To make persistent changes to the cgroup, add or modify configuration parameters in its unit file. For more information, see Section 2.3.2, "Modifying Unit Files" . | [
"~]# systemd-run --unit= name --scope --slice= slice_name command",
"~]# systemd-run --unit= toptest --slice= test top -b",
"Running as unit toptest.service"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/chap-using_control_groups |
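As a complement to Example 2.1, the following sketch shows the transient unit being created in a custom slice and then constrained at runtime. The CPUShares value is illustrative only, and the commands assume the systemd shipped with RHEL 7:
~]# systemd-run --unit=toptest --slice=test top -b
# Adjust the CPU weight of the running transient unit without restarting it.
~]# systemctl set-property toptest.service CPUShares=600
# Verify the resulting setting.
~]# systemctl show toptest.service -p CPUShares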
10.10. Automating the Installation with Kickstart | 10.10. Automating the Installation with Kickstart Red Hat Enterprise Linux 7 offers a way to partially or fully automate the installation process using a Kickstart file . Kickstart files contain answers to all questions normally asked by the installation program, such as what time zone you want the system to use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file at the beginning of the installation therefore allows you to perform the entire installation (or parts of it) automatically, without the need for any intervention from the user. This is especially useful when deploying Red Hat Enterprise Linux on a large number of systems at once. In addition to allowing you to automate the installation, Kickstart files also provide more options regarding software selection. When installing Red Hat Enterprise Linux manually using the graphical installation interface, your software selection is limited to pre-defined environments and add-ons. A Kickstart file allows you to install or remove individual packages as well. For instructions about creating a Kickstart file and using it to automate the installation, see Chapter 27, Kickstart Installations . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-planning-kickstart-ppc |
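As a brief illustration of the workflow described above — the server name and file path are placeholders, not values from this guide — a prepared Kickstart file is commonly validated and then passed to the installer on the boot command line:
# Validate the Kickstart file before use (ksvalidator is provided by pykickstart).
$ ksvalidator /path/to/ks.cfg
# At the installer boot prompt, point the installation program at the file over HTTP:
inst.ks=http://server.example.com/ks.cfg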
Chapter 1. Making open source more inclusive | Chapter 1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.0/making-open-source-more-inclusive |
20.2. Operating System Booting | 20.2. Operating System Booting There are a number of different ways to boot virtual machines, each with its own pros and cons. Each one is described in the sub-sections that follow: BIOS boot loader, host physical machine boot loader, and direct kernel boot. 20.2.1. BIOS Boot loader Booting through the BIOS is available for hypervisors supporting full virtualization. In this case the BIOS has a boot order priority (floppy, harddisk, cdrom, network) determining where to obtain/find the boot image. The OS section of the domain XML contains the information as follows: ... <os> <type>hvm</type> <loader>/usr/lib/xen/boot/hvmloader</loader> <boot dev='hd'/> <boot dev='cdrom'/> <bootmenu enable='yes'/> <smbios mode='sysinfo'/> <bios useserial='yes' rebootTimeout='0'/> </os> ... Figure 20.2. BIOS boot loader domain XML The components of this section of the domain XML are as follows: Table 20.2. BIOS boot loader elements Element Description <type> Specifies the type of operating system to be booted on the guest virtual machine. hvm indicates that the OS is one designed to run on bare metal, so requires full virtualization. linux refers to an OS that supports the Xen 3 hypervisor guest ABI. There are also two optional attributes: arch , specifying the CPU architecture to virtualize, and machine , referring to the machine type. Refer to Driver Capabilities for more information. <loader> refers to a piece of firmware that is used to assist the domain creation process. It is only needed for using Xen fully virtualized domains. <boot> takes one of the values: fd , hd , cdrom or network and is used to specify the boot device to consider. The boot element can be repeated multiple times to set up a priority list of boot devices to try in turn. Multiple devices of the same type are sorted according to their targets while preserving the order of buses. After defining the domain, its XML configuration returned by libvirt (through virDomainGetXMLDesc) lists devices in the sorted order. Once sorted, the first device is marked as bootable. For more information see BIOS bootloader . <bootmenu> determines whether or not to enable an interactive boot menu prompt on guest virtual machine startup. The enable attribute can be either yes or no . If not specified, the hypervisor default is used. <smbios> determines how SMBIOS information is made visible in the guest virtual machine. The mode attribute must be specified, as either emulate (lets the hypervisor generate all values), host (copies all of Block 0 and Block 1, except for the UUID, from the host physical machine's SMBIOS values; the virConnectGetSysinfo call can be used to see what values are copied), or sysinfo (uses the values in the sysinfo element). If not specified, the hypervisor default setting is used. <bios> This element has attribute useserial with possible values yes or no . The attribute enables or disables the Serial Graphics Adapter, which allows users to see BIOS messages on a serial port. Therefore, one needs to have a serial port defined. Note there is another attribute, rebootTimeout , that controls whether and after how long the guest virtual machine should start booting again in case the boot fails (according to BIOS). The value is in milliseconds with a maximum of 65535; the special value -1 disables the reboot. A brief virsh editing sketch appears after the XML listing for this section. | [
"<os> <type>hvm</type> <loader>/usr/lib/xen/boot/hvmloader</loader> <boot dev='hd'/> <boot dev='cdrom'/> <bootmenu enable='yes'/> <smbios mode='sysinfo'/> <bios useserial='yes' rebootTimeout='0'/> </os>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-dom-xml-op-sys-boot |
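The os element shown in Figure 20.2 is normally changed through the standard libvirt editing round trip. A minimal sketch follows, assuming an existing guest named guest1 (a hypothetical name):
# virsh edit opens the domain XML in $EDITOR and validates it on save.
$ virsh edit guest1
# Boot-order changes take effect on the next guest start.
$ virsh shutdown guest1
$ virsh start guest1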
7.123. man-pages-overrides | 7.123. man-pages-overrides 7.123.1. RHBA-2015:1295 - man-pages-overrides bug fix update An updated man-pages-overrides package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The man-pages-overrides package provides a collection of manual (man) pages to complement other packages or update those contained therein. Bug Fixes BZ# 1205351 Previously, the eventfd(2) manual page did not describe the EFD_SEMAPHORE flag, although the kernel supported this feature. This update adds the missing details about EFD_SEMAPHORE to eventfd(2). BZ# 1207200 The yum-security(8) manual page contained insufficient information about the package selection mechanism of the "update-minimum" command with the "--advisory" option. This update adds a more detailed explanation of this process, including an example syntax. BZ# 1140473 Previously, the description of the %util field in the iostat(1) and sar(1) manual pages was incorrect. The description of %util has been fixed, and documentation of the iostat and sar commands is now correct. BZ# 1205377 The pthread_kill(3) manual page contained incorrect information about the possibility of using the pthread_kill() function to check for the existence of a thread ID. Consequently, following this instruction led to a segmentation fault in case of a non-existent thread ID. The misleading piece of information has been removed and pthread_kill(3) now includes more details about handling of non-existent thread IDs. BZ# 1159335 Previously, the statfs struct section in the statfs(2) manual page did not mention the "f_flags" and "f_spare" fields. This update adds the missing fields to statfs(2). BZ# 1121700 The reposync(1) manual page did not contain descriptions of the "e", "d", "m", and "norepopath" options. With this update, reposync(1) provides the complete list of options and their descriptions. BZ# 1159842 Prior to this update, certain manual pages in the Russian language were incorrectly encoded. As a consequence, users were unable to read such man pages. This bug has been fixed, and the man pages are now displayed in the correct encoding. Users of man-pages-overrides are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-man-pages-overrides |
Chapter 29. Kubernetes NMState | Chapter 29. Kubernetes NMState 29.1. About the Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server. Important Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power, IBM Z, IBM(R) LinuxONE, VMware vSphere, and OpenStack installations. Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator. Note The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge. OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: NodeNetworkState Reports the state of the network on that node. NodeNetworkConfigurationPolicy Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. NodeNetworkConfigurationEnactment Reports the network policies enacted upon each node. 29.1.1. Installing the Kubernetes NMState Operator You can install the Kubernetes NMState Operator by using the web console or the CLI. 29.1.1.1. Installing the Kubernetes NMState Operator using the web console You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You are logged in as a user with cluster-admin privileges. Procedure Select Operators OperatorHub . In the search field below All Items , enter nmstate and click Enter to search for the Kubernetes NMState Operator. Click on the Kubernetes NMState Operator search result. Click on Install to open the Install Operator window. Click Install to install the Operator. After the Operator finishes installing, click View Operator . Under Provided APIs , click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate . In the Name field of the dialog box, ensure the name of the instance is nmstate. Note The name restriction is a known issue. The instance is a singleton for the entire cluster. Accept the default settings and click Create to create the instance. Summary Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. 29.1.1.2. Installing the Kubernetes NMState Operator by using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI ( oc) . After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. 
Procedure Create the nmstate Operator namespace: $ cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF Create the OperatorGroup : $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF Subscribe to the nmstate Operator: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Create an instance of the nmstate Operator: $ cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF Verification Confirm that the deployment for the nmstate operator is running: $ oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase kubernetes-nmstate-operator.4.13.0-202210210157 Succeeded 29.1.2. Uninstalling the Kubernetes NMState Operator You can use the Operator Lifecycle Manager (OLM) to uninstall the Kubernetes NMState Operator, but by design OLM does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services. Before you uninstall the Kubernetes NMState Operator from the Subscription resource used by OLM, identify what Kubernetes NMState Operator resources to delete. This identification ensures that you can delete resources without impacting your running cluster. If you need to reinstall the Kubernetes NMState Operator, see "Installing the Kubernetes NMState Operator by using the CLI" or "Installing the Kubernetes NMState Operator by using the web console". Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Unsubscribe the Kubernetes NMState Operator from the Subscription resource by running the following command: $ oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator Find the ClusterServiceVersion (CSV) resource that associates with the Kubernetes NMState Operator: $ oc get --namespace openshift-nmstate clusterserviceversion Example output that lists a CSV resource NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded Delete the CSV resource. After you delete the resource, OLM deletes certain resources, such as RBAC , that it created for the Operator. $ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0 Delete the nmstate CR and any associated Deployment resources by running the following commands: $ oc -n openshift-nmstate delete nmstate nmstate $ oc delete --all deployments --namespace=openshift-nmstate Delete all of the custom resource definitions (CRDs), such as nmstates , that exist in the nmstate.io API group by running the following commands: $ oc delete crd nmstates.nmstate.io $ oc delete crd nodenetworkconfigurationenactments.nmstate.io $ oc delete crd nodenetworkstates.nmstate.io $ oc delete crd nodenetworkconfigurationpolicies.nmstate.io Delete the namespace: $ oc delete namespace openshift-nmstate 29.2. Observing and updating the node network state and configuration 29.2.1.
Viewing the network state of a node Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node. Procedure List all the NodeNetworkState objects in the cluster: $ oc get nns Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity: $ oc get nns node01 -o yaml Example output apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: # ... interfaces: # ... route-rules: # ... routes: # ... lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3 1 The name of the NodeNetworkState object is taken from the node. 2 The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes. 3 Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report. 29.2.2. The NodeNetworkConfigurationPolicy manifest file A NodeNetworkConfigurationPolicy (NNCP) manifest file defines policies that the Kubernetes NMState Operator uses to configure networking for nodes that exist in an OpenShift Container Platform cluster. After you apply a node network policy to a node, the Kubernetes NMState Operator creates an interface on the node. A node network policy includes your requested network configuration and the status of execution for the policy on the cluster as a whole. You can create an NNCP by using either the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. As a postinstallation task you can create an NNCP or edit an existing NNCP. Note Before you create an NNCP, ensure that you read the "Example policy configurations for different interfaces" document. If you want to delete an NNCP, you can use the oc delete nncp command to complete this action. However, this command does not delete any created objects, such as a bridge interface. Deleting the node network policy that added an interface to a node does not change the configuration of the policy on the node. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator recreates the removed interface whenever a pod or a node is restarted. Effectively deleting the NNCP, the node network policy, and any created interfaces typically requires the following actions: Edit the NNCP and remove interface details from the file. Ensure that you do not remove name , state , and type parameters from the file. Add state: absent under the interfaces.state section of the NNCP. Run oc apply -f <nncp_file_name> . After the Kubernetes NMState Operator applies the node network policy to each node in your cluster, the interface that was previously created on each node is now marked absent . Run oc delete nncp to delete the NNCP. Additional resources Example policy configurations for different interfaces Removing an interface from nodes 29.2.3. Managing policy by using the CLI 29.2.3.1. Creating an interface on nodes Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface. By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector.
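Before applying a policy scoped with a node selector, it can help to confirm which nodes the selector will match; the label key and value in the second command are hypothetical stand-ins for your own:
# List the nodes that carry the worker role label used in the examples below.
$ oc get nodes -l node-role.kubernetes.io/worker
# To target specific nodes instead, apply a custom label and select on it.
$ oc label node node01 example.com/nmstate=enabled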
You can configure multiple nmstate-enabled nodes concurrently. The configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field. Procedure Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%" , or an absolute value (number), such as 3 . 5 Optional: Human-readable description for the interface. 6 Optional: Specifies the search and server settings for the DNS server. Create the node network policy: USD oc apply -f br1-eth1-policy.yaml 1 1 File name of the node network configuration policy manifest. Additional resources Example for creating multiple interfaces in the same policy Examples of different IP management methods in policies 29.2.4. Confirming node network policy updates on nodes When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting. Procedure To confirm that a policy has been applied to the cluster, list the policies and their status: USD oc get nncp Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy: USD oc get nncp <policy> -o yaml Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster: USD oc get nnce Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration: USD oc get nnce <node>.<policy> -o yaml 29.2.5. Removing an interface from nodes You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent . Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore a previous state, you will need to define that node network configuration in the policy.
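Before you set an interface to absent , you can confirm which interfaces a given policy manages; the following jsonpath query is a minimal sketch, where <policy> is a placeholder:
USD oc get nncp <policy> -o jsonpath='{.spec.desiredState.interfaces[*].name}'
The output lists the interface names that the policy defines in its desired state.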
If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address. Note Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, the object only represents the requested configuration. Similarly, removing an interface does not delete the policy. Procedure Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Changing the state to absent removes the interface. 5 The name of the interface that is to be unattached from the bridge interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. Update the policy on the node and remove the interface: USD oc apply -f <br1-eth1-policy.yaml> 1 1 File name of the policy manifest. 29.2.6. Example policy configurations for different interfaces Before you read the different example NodeNetworkConfigurationPolicy (NNCP) manifest configurations, consider the following factors when you apply a policy to nodes so that your cluster runs under its best performance conditions: When you need to apply a policy to more than one node, create a NodeNetworkConfigurationPolicy manifest for each target node. The Kubernetes NMState Operator applies the policy to each node with a defined NNCP in an unspecified order. Scoping a policy with this approach reduces the length of time for policy application but risks a cluster-wide outage if an error exists in the cluster's configuration. To avoid this type of error, initially apply an NNCP to some nodes, confirm the NNCP is configured correctly for these nodes, and then proceed with applying the policy to the remaining nodes. When you need to apply a policy to many nodes but you only want to create a single NNCP for all the nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the cluster's configuration file. By setting a lower percentage value for the parameter, you can reduce the risk of a cluster-wide outage if the outage impacts the small percentage of nodes that are receiving the policy application. Consider specifying all related network configurations in a single policy. 
When a node restarts, the Kubernetes NMState Operator cannot control the order in which it applies policies to nodes. The Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object. 29.2.6.1. Example: Linux bridge interface node network configuration policy Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bridge. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 Disables stp in this example. 11 The node NIC to which the bridge attaches. 29.2.6.2. Example: VLAN interface node network configuration policy Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Note Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface for a node and the related routes for the VLAN interface in the same NodeNetworkConfigurationPolicy manifest. When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object. The following YAML file is an example of a manifest for a VLAN interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a VLAN. 7 The requested state for the interface after creation. 8 The node NIC to which the VLAN is attached. 9 The VLAN tag. 29.2.6.3. Example: Bond interface node network configuration policy Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
Note OpenShift Container Platform only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Other bond modes are not supported. The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bond. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 The driver mode for the bond. This example uses an active backup mode. 11 Optional: This example uses miimon to inspect the bond link every 140ms. 12 The subordinate node NICs in the bond. 13 Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default. 29.2.6.4. Example: Ethernet interface node network configuration policy Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 29.2.6.5. Example: Multiple interfaces in the same node network configuration policy You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. The following example YAML file creates a bond that is named bond10 across two NICs and a VLAN that is named bond10.103 that connects to the bond.
apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 11 Name of the interface. 5 12 Optional: Human-readable description of the interface. 6 13 The type of interface. 7 14 The requested state for the interface after creation. 8 The driver mode for the bond. 9 Optional: This example uses miimon to inspect the bond link every 140ms. 10 The subordinate node NICs in the bond. 15 The node NIC to which the VLAN is attached. 16 The VLAN tag. 17 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address. 18 Enables ipv4 in this example. 29.2.7. Capturing the static IP of a NIC attached to a bridge Important Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 29.2.7.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" capture: eth1-nic: interfaces.name=="eth1" 3 eth1-routes: routes.running.next-hop-interface=="eth1" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: "{{ capture.br1-routes.routes.running }}" 1 The name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 3 The reference to the node NIC to which the bridge attaches. 4 The type of interface. This example creates a bridge. 5 The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry. 6 The node NIC to which the bridge attaches.
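To try this capture-based policy, apply the manifest and confirm that the policy reports success; a minimal sketch, assuming the manifest is saved as br1-eth1-copy-ipv4-policy.yaml :
USD oc apply -f br1-eth1-copy-ipv4-policy.yaml
USD oc get nncp br1-eth1-copy-ipv4-policy
A STATUS value of SuccessfullyConfigured indicates that the bridge inherited the captured IPv4 configuration and routes.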
Additional resources The NMPolicy project - Policy syntax 29.2.8. Examples: IP management The following example configuration snippets show different methods of IP management. These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types. 29.2.8.1. Static The following snippet statically configures an IP address on the Ethernet interface: # ... interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true # ... 1 Replace this value with the static IP address for the interface. 29.2.8.2. No IP address The following snippet ensures that the interface has no IP address: # ... interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false # ... Important Always set the state parameter to up when you set both the ipv4.enabled and the ipv6.enabled parameters to false to disable an interface. If you set state: down with this configuration, the interface receives a DHCP IP address because of automatic DHCP assignment. 29.2.8.3. Dynamic host configuration The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS: # ... interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true # ... The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS: # ... interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true # ... 29.2.8.4. DNS By default, the nmstate API stores DNS values globally as opposed to storing them in a network interface. For certain situations, you must configure a network interface to store DNS values. Tip Setting a DNS configuration is comparable to modifying the /etc/resolv.conf file. To define a DNS configuration for a network interface, you must initially specify the dns-resolver section in the network interface's YAML configuration file. To apply an NNCP configuration to your network interface, you need to run the oc apply -f <nncp_file_name> command. Important You cannot use the br-ex bridge, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers unless you manually configured a customized br-ex bridge. For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document. The following example shows a default situation that stores DNS values globally: Configure a static DNS without a network interface. Note that when updating the /etc/resolv.conf file on a host node, you do not need to specify an interface, IPv4 or IPv6, in the NodeNetworkConfigurationPolicy (NNCP) manifest. Example of a DNS configuration for a network interface that globally stores DNS values apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251 # ...
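After the policy is applied, you can check the resulting resolver configuration directly on the node; a minimal sketch using a debug pod, where <target_node> is a placeholder:
USD oc debug node/<target_node> -- chroot /host cat /etc/resolv.conf
The search domains and name servers from the policy should appear in the node's /etc/resolv.conf file.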
Important You can specify DNS options under the dns-resolver.config section of your NNCP file as demonstrated in the following example: # ... desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3 # ... If you want to remove the DNS options from your network interface, apply the following configuration to your NNCP and then run the oc apply -f <nncp_file_name> command: # ... dns-resolver: config: {} interfaces: [] # ... The following examples show situations that require configuring a network interface to store DNS values: If you want to rank a static DNS name server ahead of a dynamic DNS name server, define the interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration ( autoconf ) mechanism in the network interface YAML configuration file. Example configuration that adds 192.0.2.1 to DNS name servers retrieved from the DHCPv4 network protocol # ... dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true # ... If you need to configure a network interface to store DNS values instead of adopting the default method, which uses the nmstate API to store DNS values globally, you can set static DNS values and static IP addresses in the network interface YAML file. Important Storing DNS values at the network interface level might cause name resolution issues after you attach the interface to network components, such as an Open vSwitch (OVS) bridge, a Linux bridge, or a bond. Example configuration that stores DNS values at the interface level # ... dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false # ... If you want to set static DNS search domains and dynamic DNS name servers for your network interface, define the dynamic interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration ( autoconf ) mechanism in the network interface YAML configuration file. Example configuration that sets example.com and example.org static DNS search domains along with dynamic DNS name server settings # ... dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true # ... 29.2.8.5. Static routing The following snippet configures a static route and a static IP on interface eth1 . dns-resolver: config: # ... interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254 # ... 1 The static IP address for the Ethernet interface. 2 The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. Important You cannot use the OVN-Kubernetes br-ex bridge as the next hop interface when configuring a static route unless you manually configured a customized br-ex bridge.
For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document. 29.3. Troubleshooting node network configuration If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as: The configuration fails to be applied on the host. The host loses connection to the default gateway. The host loses connection to the API server. 29.3.1. Troubleshooting an incorrect node network configuration policy configuration You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface. To find an error, you need to investigate the available NMState resources. You can then update the policy with the correct configuration. Prerequisites You ensured that an ens01 interface does not exist on your Linux system. Procedure Create a policy on your cluster. The following example creates a simple bridge, br1 , that has ens01 as its member: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01 # ... Apply the policy to your network interface: USD oc apply -f ens01-bridge-testfail.yaml Example output nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created Verify the status of the policy by running the following command: USD oc get nncp The output shows that the policy failed: Example output NAME STATUS ens01-bridge-testfail FailedToConfigure The policy status alone does not indicate if it failed on all nodes or a subset of nodes. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, the output suggests that the problem is with a specific node configuration. If the policy failed on all nodes, the output suggests that the problem is with the policy. USD oc get nnce The output shows that the policy failed on all nodes: Example output NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure View one of the failed enactments. The following command uses the output tool jsonpath to filter the output: USD oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}' Example output [2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01 The example shows the output from an InvalidArgument error that indicates that ens01 is an unknown port.
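Before correcting the policy, it can help to list the interface names that actually exist on a node; the following jsonpath query is a minimal sketch:
USD oc get nns control-plane-1 -o jsonpath='{.status.currentState.interfaces[*].name}'
Compare the returned names with the port name in the failed policy to spot the mismatch.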
For this example, you might need to change the port configuration in the policy configuration file. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node: USD oc get nns control-plane-1 -o yaml The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01 : Example output - ipv4: # ... name: ens1 state: up type: ethernet Correct the error by editing the existing policy: USD oc edit nncp ens01-bridge-testfail # ... port: - name: ens1 Save the policy to apply the correction. Check the status of the policy to ensure it updated successfully: USD oc get nncp Example output NAME STATUS ens01-bridge-testfail SuccessfullyConfigured The updated policy is successfully configured on all nodes in the cluster. 29.3.2. Troubleshooting DNS connectivity issues in a disconnected environment If you experience DNS connectivity issues when configuring nmstate in a disconnected environment, you can configure the DNS server to resolve the list of name servers for the domain root-servers.net . Important Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone. The DNS server does not need to forward a query to an upstream resolver, but the server must return a correct answer for the NS query. 29.3.2.1. Configuring the bind9 DNS named server For a cluster configured to query a bind9 DNS server, you can add the root-servers.net zone to a configuration file that contains at least one NS record. For example, you can use the /var/named/named.localhost file as a zone file that already matches these criteria. Procedure Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command: USD cat >> /etc/named.conf <<EOF zone "root-servers.net" IN { type master; file "named.localhost"; }; EOF Restart the named service by running the following command: USD systemctl restart named Confirm that the root-servers.net zone is present by running the following command: USD journalctl -u named|grep root-servers.net Example output Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0 Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command: USD host -t NS root-servers.net. 127.0.0.1 Example output Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net. 29.3.2.2. Configuring the dnsmasq DNS server If you are using dnsmasq as the DNS server, you can delegate resolution of the root-servers.net domain to another DNS server, for example, by creating a new configuration file that resolves root-servers.net using a DNS server that you specify.
Procedure Create a configuration file that delegates the domain root-servers.net to another DNS server by running the following command: USD echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf Restart the dnsmasq service by running the following command: USD systemctl restart dnsmasq Confirm that the root-servers.net domain is delegated to another DNS server by running the following command: USD journalctl -u dnsmasq|grep root-servers.net Example output Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command: USD host -t NS root-servers.net. 127.0.0.1 Example output Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net. | [
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF",
"cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF",
"get clusterserviceversion -n openshift-nmstate -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase kubernetes-nmstate-operator.4.13.0-202210210157 Succeeded",
"oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator",
"oc get --namespace openshift-nmstate clusterserviceversion",
"NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded",
"oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0",
"oc -n openshift-nmstate delete nmstate nmstate",
"oc delete --all deployments --namespace=openshift-nmstate",
"oc delete crd nmstates.nmstate.io",
"oc delete crd nodenetworkconfigurationenactments.nmstate.io",
"oc delete crd nodenetworkstates.nmstate.io",
"oc delete crd nodenetworkconfigurationpolicies.nmstate.io",
"oc delete namespace kubernetes-nmstate",
"oc get nns",
"oc get nns node01 -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8",
"oc apply -f br1-eth1-policy.yaml 1",
"oc get nncp",
"oc get nncp <policy> -o yaml",
"oc get nnce",
"oc get nnce <node>.<policy> -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"oc apply -f <br1-eth1-policy.yaml> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"",
"interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true",
"interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false",
"interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true",
"interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251",
"desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3",
"dns-resolver: config: {} interfaces: []",
"dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true",
"dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false",
"dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true",
"dns-resolver: config: interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01",
"oc apply -f ens01-bridge-testfail.yaml",
"nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail FailedToConfigure",
"oc get nnce",
"NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure",
"oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'",
"[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01",
"oc get nns control-plane-1 -o yaml",
"- ipv4: name: ens1 state: up type: ethernet",
"oc edit nncp ens01-bridge-testfail",
"port: - name: ens1",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail SuccessfullyConfigured",
"cat >> /etc/named.conf <<EOF zone \"root-servers.net\" IN { type master; file \"named.localhost\"; }; EOF",
"systemctl restart named",
"journalctl -u named|grep root-servers.net",
"Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net.",
"echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf",
"systemctl restart dnsmasq",
"journalctl -u dnsmasq|grep root-servers.net",
"Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/kubernetes-nmstate |
Chapter 3. Mirroring images for a disconnected installation | Chapter 3. Mirroring images for a disconnected installation You can ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the internet. If you do not have access to a mirror host, use the Mirroring Operator catalogs for use with disconnected clusters procedure to copy images to a device you can move across network boundaries with. 3.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Red Hat Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 3.2. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. 
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.3. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.3.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.4. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.5. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. 
If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . 
This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using the following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the output of the podman images command shows Quay.io as the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" \ --insecure=true 1 1 Optional: If you do not want to configure trust for the target registry, add the --insecure=true flag. If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.6. The Cluster Samples Operator in a disconnected environment In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation. 3.6.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples.
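If you decide to enable samples after mirroring, a patch like the following sketch sets the samples registry and the management state in one step; the registry host name is a placeholder, and you should run this only after the required sample images are available in the mirror registry:
USD oc patch configs.samples.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"samplesRegistry":"registry.example.com:5000","managementState":"Managed"}}'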
Note The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 3.7. Mirroring Operator catalogs for use with disconnected clusters You can mirror the Operator contents of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2 . For a cluster on a restricted network, this registry can be one that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Running oc adm catalog mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . The oc adm catalog mirror command also automatically mirrors the index image that is specified during the mirroring process, whether it be a Red Hat-provided index image or your own custom-built index image, to the target registry. You can then use the mirrored index image to create a catalog source that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster. Additional resources Using Operator Lifecycle Manager on restricted networks 3.7.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters has the following prerequisites: Workstation with unrestricted network access. podman version 1.9.3 or later. If you want to filter, or prune , an existing catalog and selectively mirror only a subset of Operators, see the following sections: Installing the opm CLI Updating or filtering a file-based catalog image If you want to mirror a Red Hat-provided catalog, run the following command on your workstation with unrestricted network access to authenticate with registry.redhat.io : USD podman login registry.redhat.io Access to a mirror registry that supports Docker v2-2 . On your mirror registry, decide which repository, or namespace, to use for storing mirrored Operator content. For example, you might create an olm-mirror repository.
If your mirror registry does not have internet access, connect removable media to your workstation with unrestricted network access. If you are working with private registries, including registry.redhat.io , set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json 3.7.2. Extracting and mirroring catalog contents The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. The default behavior of the command generates manifests, then automatically mirrors all of the image content from the index image, as well as the index image itself, to your mirror registry. Alternatively, if your mirror registry is on a completely disconnected, or airgapped , host, you can first mirror the content to removable media, move the media to the disconnected environment, then mirror the content from the media to the registry. 3.7.2.1. Mirroring catalog contents to registries on the same network If your mirror registry is co-located on the same network as your workstation with unrestricted network access, take the following actions on your workstation. Procedure If your mirror registry requires authentication, run the following command to log in to the registry: USD podman login <mirror_registry> Run the following command to extract and mirror the content to the mirror registry: USD oc adm catalog mirror \ <index_image> \ 1 <mirror_registry>:<port>[/<repository>] \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] \ 5 [--manifests-only] 6 1 Specify the index image for the catalog that you want to mirror. 2 Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port> . 3 Optional: If required, specify the location of your registry credentials file. {REG_CREDS} is required for registry.redhat.io . 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 . 6 Optional: Generate only the manifests required for mirroring without actually mirroring the image content to a registry. This option can be useful for reviewing what will be mirrored, and lets you make any changes to the mapping list, if you require only a subset of packages. You can then use the mapping.txt file with the oc image mirror command to mirror the modified list of images in a later step. This flag is intended for only advanced selective mirroring of content from the catalog. Example output src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 ... 
wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2 1 Directory for the temporary index.db database generated by the command. 2 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Additional resources Architecture and operating system support for Operators 3.7.2.2. Mirroring catalog contents to airgapped registries If your mirror registry is on a completely disconnected, or airgapped, host, take the following actions. Procedure Run the following command on your workstation with unrestricted network access to mirror the content to local files: USD oc adm catalog mirror \ <index_image> \ 1 file:///local/index \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the index image for the catalog that you want to mirror. 2 Specify the content to mirror to local files in your current directory. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 , and .* Example output ... info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2 1 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. 2 Record the expanded file:// path that is based on your provided index image. This path is referenced in a subsequent step. This command creates a v2/ directory in your current directory. Copy the v2/ directory to removable media. Physically remove the media and attach it to a host in the disconnected environment that has access to the mirror registry. If your mirror registry requires authentication, run the following command on your host in the disconnected environment to log in to the registry: USD podman login <mirror_registry> Run the following command from the parent directory containing the v2/ directory to upload the images from local files to the mirror registry: USD oc adm catalog mirror \ file://local/index/<repository>/<index_image>:<tag> \ 1 <mirror_registry>:<port>[/<repository>] \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the file:// path from the command output. 2 Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. 
If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port> . 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 , and .* Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Run the oc adm catalog mirror command again. Use the newly mirrored index image as the source and the same mirror registry target used in the previous step: USD oc adm catalog mirror \ <mirror_registry>:<port>/<index_image> \ <mirror_registry>:<port>[/<repository>] \ --manifests-only \ 1 [-a USD{REG_CREDS}] \ [--insecure] 1 The --manifests-only flag is required for this step so that the command does not copy all of the mirrored content again. Important This step is required because the image mappings in the imageContentSourcePolicy.yaml file generated during the previous step must be updated from local paths to valid mirror locations. Failure to do so will cause errors when you create the ImageContentSourcePolicy object in a later step. After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to enable installation of Operators from OperatorHub. Additional resources Architecture and operating system support for Operators 3.7.3. Generated manifests After mirroring Operator catalog content to your mirror registry, a manifests directory is generated in your current directory. If you mirrored content to a registry on the same network, the directory name takes the following pattern: manifests-<index_image_name>-<random_number> If you mirrored content to a registry on a disconnected host in the previous section, the directory name takes the following pattern: manifests-index/<repository>/<index_image_name>-<random_number> Note The manifests directory name is referenced in subsequent procedures. The manifests directory contains the following files, some of which might require further modification: The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your index image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster. Important If you mirrored the content to local files, you must modify your catalogSource.yaml file to remove any forward slash ( / ) characters from the metadata.name field. Otherwise, when you attempt to create the object, it fails with an "invalid resource name" error.
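For illustration only, a generated catalogSource.yaml has roughly the following shape; the metadata.name shown here is a hypothetical, valid value with no slash characters, and the image value is a placeholder for your mirrored index image:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <mirror_registry>:<port>/<repository>/<index_image>:<tag>
  displayName: My Operator Catalog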
The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration. Important If you used the --manifests-only flag during the mirroring process and want to further trim the subset of packages to mirror, see the steps in the Mirroring a package manifest format catalog image procedure of the OpenShift Container Platform 4.7 documentation about modifying your mapping.txt file and using the file with the oc image mirror command. 3.7.4. Postinstallation requirements After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to populate and enable installation of Operators from OperatorHub. Additional resources Populating OperatorHub from mirrored Operator catalogs Updating or filtering a file-based catalog image 3.8. Next steps Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere , bare metal , or Amazon Web Services . 3.9. Additional resources See Gathering data about specific features for more information about using must-gather. | [
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\" --insecure=true 1",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"podman login registry.redhat.io",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"podman login <mirror_registry>",
"oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2",
"oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2",
"podman login <mirror_registry>",
"oc adm catalog mirror file://local/index/<repository>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>[/<repository>] --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]",
"manifests-<index_image_name>-<random_number>",
"manifests-index/<repository>/<index_image_name>-<random_number>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/disconnected_installation_mirroring/installing-mirroring-installation-images |
Chapter 2. Dynamically provisioned OpenShift Data Foundation deployed on VMware 2.1. Replacing operational or failed storage devices on VMware infrastructure Create a new Persistent Volume Claim (PVC) on a new volume, and remove the old object storage device (OSD) when one or more virtual machine disks (VMDKs) need to be replaced in OpenShift Data Foundation which is deployed dynamically on VMware infrastructure. Prerequisites Ensure that the data is resilient. In the OpenShift Web Console, click Storage → Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of the Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD that you want to replace is healthy, the status of the pod is Running . Scale down the OSD deployment for the OSD to be replaced. Each time you want to replace the OSD, update the osd_id_to_remove parameter with the OSD ID, and repeat this step. where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in a terminating state, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Delete any old ocs-osd-removal jobs. Example output: Note If the above job does not reach Completed state after 10 minutes, then the job must be deleted and rerun with FORCE_OSD_REMOVAL=true . Navigate to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job pod fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mappings from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find a relevant device name based on the PVC names identified in the previous step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command.
Find the PID of the process that was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created that is in the Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard. | [
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0",
"oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found.",
"oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/ <node name>",
"chroot /host",
"dmsetup ls| grep <pvc name>",
"ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc get -n openshift-storage pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-2s6w4 Bound pvc-7c9bcaf7-de68-40e1-95f9-0b0d7c0ae2fc 512Gi RWO thin 5m ocs-deviceset-1-0-q8fwh Bound pvc-9e7e00cb-6b33-402e-9dc5-b8df4fd9010f 512Gi RWO thin 1d20h ocs-deviceset-2-0-9v8lq Bound pvc-38cdfcee-ea7e-42a5-a6e1-aaa6d4924291 512Gi RWO thin 1d20h",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node name>",
"chroot /host",
"lsblk"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_vmware |
Chapter 7. Installing a cluster on IBM Power Virtual Server in a disconnected environment | Chapter 7. Installing a cluster on IBM Power Virtual Server in a disconnected environment In OpenShift Container Platform 4.18, you can install a cluster on IBM Cloud(R) in a restricted network by creating an internal mirror of the installation release content on an existing Virtual Private Cloud (VPC) on IBM Cloud(R). 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in IBM Cloud(R). When installing a cluster in a restricted network, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 7.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. Note For installer-provisioned infrastructure in OpenShift Container Platform 4.18, you need to deploy your restricted network cluster in OpenShift Container Platform 4.16 and upgrade it to OpenShift Container Platform 4.18. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. About using a custom VPC In OpenShift Container Platform 4.18, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). 7.3.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. 
The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.3.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. Note Subnet IDs are not supported. 7.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target.
Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.powervs field: vpcName: <existing_vpc> vpcSubnets: <vpcSubnet> For platform.powervs.vpcName , specify the name of the existing IBM Cloud(R) VPC. For platform.powervs.vpcSubnets , specify the existing subnets. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 7.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements:

Table 7.1. Minimum resource requirements
Machine         Operating System   vCPU [1]   Virtual RAM   Storage   Input/Output Per Second (IOPS) [2]
Bootstrap       RHCOS              2          16 GB         100 GB    300
Control plane   RHCOS              2          16 GB         100 GB    300
Compute         RHCOS              2          8 GB          100 GB    300

[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. [2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
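As a worked example of the vCPU formula in footnote 1, using hypothetical hardware values: a node with 2 sockets, 1 core per socket, and 8 threads per core (SMT-8) provides (8 threads per core x 1 core) x 2 sockets = 16 vCPUs, while the same node with SMT disabled provides (1 x 1) x 2 = 2 vCPUs.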
Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: powervs: smtLevel: 8 5 replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: powervs: smtLevel: 8 9 ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 11 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 12 networkType: OVNKubernetes 13 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" 14 region: "powervs-region" vpcRegion: "vpc-region" vpcName: name-of-existing-vpc 15 vpcSubnets: 16 - name-of-existing-vpc-subnet zone: "powervs-zone" serviceInstanceID: "service-instance-id" publish: Internal credentialsMode: Manual pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 17 sshKey: ssh-ed25519 AAAA... 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 The smtLevel specifies the level of SMT to set to the control plane and compute machines. 
The supported values are 1, 2, 4, 8, 'off', and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtLevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading, is not enabled, one vCPU is equivalent to one physical core. When enabled, the total number of vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can assign them by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 11 The machine CIDR must contain the subnets for the compute machines and control plane machines. 12 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 14 The name of an existing resource group. The existing VPC and subnets should be in this resource group. The cluster is deployed to this resource group. 15 Specify the name of an existing VPC. 16 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials> , specify the base64-encoded user name and password for your mirror registry. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. 19 Provide the contents of the certificate file that you used for your mirror registry. 20 Provide the imageContentSources section from the output of the command to mirror the repository. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings.
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual .
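As a quick, optional sanity check that is not part of the official procedure, you can confirm the edit before generating manifests: USD grep credentialsMode <installation_directory>/install-config.yaml The output should contain credentialsMode: Manual .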
To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory.
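One way to perform that verification is to list the generated files; treat this as a sketch, because the directory is whatever you passed to the --output-dir flag: USD ls <installation_directory>/manifests The listing should include one secret manifest for each CredentialsRequest object that was processed.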
7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Linux Clients entry and save the file.
Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.12. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.13.
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.14. Next steps Customize your cluster Optional: Opt out of remote health reporting Optional: Registering your disconnected cluster | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"vpcName: <existing_vpc> vpcSubnets: <vpcSubnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: powervs: smtLevel: 8 5 replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: powervs: smtLevel: 8 9 ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 11 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 12 networkType: OVNKubernetes 13 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" 14 region: \"powervs-region\" vpcRegion: \"vpc-region\" vpcName: name-of-existing-vpc 15 vpcSubnets: 16 - name-of-existing-vpc-subnet zone: \"powervs-zone\" serviceInstanceID: \"service-instance-id\" publish: Internal credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 17 sshKey: ssh-ed25519 AAAA... 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power_virtual_server/installing-restricted-networks-ibm-power-vs |
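The procedure above spans several commands; as a condensed sketch, the manual-credentials steps chain together as follows, assuming the installation assets live in ./ocp-install and the extracted requests go to ./credreqs (both paths and the cluster name are placeholders, not values from the procedure):

RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract \
    --from=$RELEASE_IMAGE \
    --credentials-requests \
    --included \
    --install-config=./ocp-install/install-config.yaml \
    --to=./credreqs
ccoctl ibmcloud create-service-id \
    --credentials-requests-dir=./credreqs \
    --name=my-cluster \
    --output-dir=./ocp-install
ls ./ocp-install/manifests    # confirm that the generated secrets are present

If you later need to approve the pending node-bootstrapper CSRs mentioned in the certificate note, the standard oc commands apply; the bulk form in the last line approves every outstanding request, so use it only when all pending CSRs are expected:

oc get csr
oc get csr -o name | xargs oc adm certificate approve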
4.358. xorg-x11-server | 4.358. xorg-x11-server 4.358.1. RHBA-2011:1816 - xorg-x11-server bug fix update Updated xorg-x11-server packages that fix one bug are now available for Red Hat Enterprise Linux 6. X.Org is an open source implementation of the X Window System. It provides the basic low-level functionality that full-fledged graphical user interfaces are designed upon. Bug Fix BZ# 759022 Previously, a bug in Xephyr's input handling code disabled screens on a screen crossing event. When the Xephyr nested X server was configured in a multi-screen setup, the focus was only on the screen where the mouse was located, and only this screen was updated. The aforementioned code has been removed and the Xephyr server now correctly updates screens in multi-screen setups. All users of xorg-x11-server are advised to upgrade to these updated packages, which fix this bug. 4.358.2. RHBA-2012:0368 - xorg-x11-server bug fix update Updated xorg-x11-server packages that fix one bug are now available for Red Hat Enterprise Linux 6. X.Org is an open source implementation of the X Window System. It provides the basic low-level functionality that full-fledged graphical user interfaces are designed upon. Bug Fix BZ# 783505 Previously, if the X server was configured as a multi-screen setup through multiple "Device" sections in the xorg.conf file, an absolute input device (for example a graphic tablet's stylus) got stuck in the right-most or bottom-most screen. This update changes the screen crossing behavior so that absolute devices are always mapped across all screens. All users of xorg-x11-server are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/xorg-x11-server
1.3.2.2. Inattentive Administration | 1.3.2.2. Inattentive Administration Administrators who fail to patch their systems are one of the greatest threats to server security. According to the SysAdmin, Audit, Network, Security Institute ( SANS ), the primary cause of computer security vulnerability is to "assign untrained people to maintain security and provide neither the training nor the time to make it possible to do the job." This applies as much to inexperienced administrators as it does to overconfident or unmotivated administrators. Some administrators fail to patch their servers and workstations, while others fail to watch log messages from the system kernel or network traffic. Another common error is when default passwords or keys to services are left unchanged. For example, some databases have default administration passwords because the database developers assume that the system administrator changes these passwords immediately after installation. If a database administrator fails to change this password, even an inexperienced attacker can use a widely-known default password to gain administrative privileges to the database. These are only a few examples of how inattentive administration can lead to compromised servers. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-threats_to_server_security-inattentive_administration
20.11. Service Level Agreement Policy Enforcement | 20.11. Service Level Agreement Policy Enforcement This procedure describes how to set service level agreement CPU features. Setting a Service Level Agreement CPU Policy Click Compute Virtual Machines . Click New , or select a virtual machine and click Edit . Click the Resource Allocation tab. Specify CPU Shares . Possible options are Low , Medium , High , Custom , and Disabled . Virtual machines set to High receive twice as many shares as Medium , and virtual machines set to Medium receive twice as many shares as virtual machines set to Low . Disabled instructs VDSM to use an older algorithm for determining share dispensation; usually the number of shares dispensed under these conditions is 1020. The CPU consumption of users is now governed by the policy you have set. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/service_level_agreement_policy_enforcement |
23.4. Using the PTP Management Client | 23.4. Using the PTP Management Client The PTP management client, pmc , can be used to obtain additional information from ptp4l as follows: Setting the -b option to zero limits the boundary to the locally running ptp4l instance. A larger boundary value will retrieve the information also from PTP nodes further from the local clock. The retrievable information includes: stepsRemoved is the number of communication paths to the grandmaster clock. offsetFromMaster and master_offset is the last measured offset of the clock from the master in nanoseconds. meanPathDelay is the estimated delay of the synchronization messages sent from the master in nanoseconds. if gmPresent is true, the PTP clock is synchronized to a master, the local clock is not the grandmaster clock. gmIdentity is the grandmaster's identity. For a full list of pmc commands, type the following as root : Additional information is available in the pmc(8) man page. | [
"~]# pmc -u -b 0 'GET CURRENT_DATA_SET' sending: GET CURRENT_DATA_SET 90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT CURRENT_DATA_SET stepsRemoved 1 offsetFromMaster -142.0 meanPathDelay 9310.0",
"~]# pmc -u -b 0 'GET TIME_STATUS_NP' sending: GET TIME_STATUS_NP 90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP master_offset 310 ingress_time 1361545089345029441 cumulativeScaledRateOffset +1.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true gmIdentity 00a069.fffe.0b552d",
"~]# pmc help"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-using_the_ptp_management_client |
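Other management TLVs can be queried with the same pattern; for example, the following illustrative queries (run as root, output omitted) retrieve the clock's default data set and per-port state:

~]# pmc -u -b 0 'GET DEFAULT_DATA_SET'
~]# pmc -u -b 0 'GET PORT_DATA_SET'

Both management IDs appear in the list printed by pmc help.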
Chapter 12. Build configuration resources | Chapter 12. Build configuration resources Use the following procedure to configure build settings. 12.1. Build controller configuration parameters The build.config.openshift.io/cluster resource offers the following configuration parameters. Parameter Description Build Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . spec : Holds user-settable values for the build controller configuration. buildDefaults Controls the default information for builds. defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. You can override values by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the BuildConfig strategy. gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . Values that are not set here are inherited from DefaultProxy. env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . resources : Defines resource requirements to execute the build. ImageLabel name : Defines the name of the label. It must have non-zero length. buildOverrides Controls override settings for builds. imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. nodeSelector : A selector which must be true for the build pod to fit on a node. tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. BuildList items : Standard object's metadata. 12.2. Configuring build settings You can configure build settings by editing the build.config.openshift.io/cluster resource. Procedure Edit the build.config.openshift.io/cluster resource by entering the following command: USD oc edit build.config.openshift.io/cluster The following is an example build.config.openshift.io/cluster resource: apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 2 name: cluster resourceVersion: "107233" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists 1 Build : Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . 2 buildDefaults : Controls the default information for builds. 3 defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 
4 env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. 5 gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . 6 imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . 7 resources : Defines resource requirements to execute the build. 8 buildOverrides : Controls override settings for builds. 9 imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. 10 nodeSelector : A selector which must be true for the build pod to fit on a node. 11 tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. | [
"oc edit build.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_buildconfig/build-configuration |
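The same settings can also be applied without an interactive editor; a minimal sketch using oc patch that sets only the default build environment variables from the example above (adjust the JSON payload to the fields you need):

oc patch build.config.openshift.io/cluster --type merge \
    -p '{"spec":{"buildDefaults":{"env":[{"name":"envkey","value":"envvalue"}]}}}'
oc get build.config.openshift.io/cluster -o yaml    # verify the change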
2.3. Built-in Command-Line Tools | 2.3. Built-in Command-Line Tools Red Hat Enterprise Linux 7 provides several tools that can be used to monitor your system from the command line, allowing you to monitor your system outside run level 5. This chapter discusses each tool briefly and provides links to further information about where each tool should be used, and how to use them. 2.3.1. top The top tool, provided by the procps-ng package, gives a dynamic view of the processes in a running system. It can display a variety of information, including a system summary and a list of tasks currently being managed by the Linux kernel. It also has a limited ability to manipulate processes, and to make configuration changes persistent across system restarts. By default, the processes displayed are ordered according to the percentage of CPU usage, so that you can easily see the processes consuming the most resources. Both the information top displays and its operation are highly configurable to allow you to concentrate on different usage statistics as required. For detailed information about using top, see the man page: 2.3.2. ps The ps tool, provided by the procps-ng package, takes a snapshot of a select group of active processes. By default, the group examined is limited to processes that are owned by the current user and associated with the terminal in which ps is run. ps can provide more detailed information about processes than top, but by default it provides a single snapshot of this data, ordered by process identifier. For detailed information about using ps, see the man page: 2.3.3. Virtual Memory Statistics (vmstat) The Virtual Memory Statistics tool, vmstat, provides instant reports on your system's processes, memory, paging, block input/output, interrupts, and CPU activity. Vmstat lets you set a sampling interval so that you can observe system activity in near-real time. vmstat is provided by the procps-ng package. For detailed information about using vmstat, see the man page: 2.3.4. System Activity Reporter (sar) The System Activity Reporter, sar, collects and reports information about system activity that has occurred so far on the current day. The default output displays the current day's CPU usage at 10 minute intervals from the beginning of the day (00:00:00 according to your system clock). You can also use the -i option to set the interval time in seconds, for example, sar -i 60 tells sar to check CPU usage every minute. sar is a useful alternative to manually creating periodic reports on system activity with top. It is provided by the sysstat package. For detailed information about using sar, see the man page: | [
"man top",
"man ps",
"man vmstat",
"man sar"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-built_in_command_line_tools |
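As quick illustrations of the tools described above, the following invocations produce one-off or short-lived reports; the options shown are common examples, not requirements:

top -b -n 1 | head -15        # one batch-mode snapshot, busiest processes first
ps aux --sort=-%cpu | head    # all processes, ordered by CPU usage
vmstat 5 4                    # four samples at five-second intervals
sar -u 60 2                   # two CPU-usage samples, one minute apart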
Appendix C. Red Hat Virtualization User Interface Plugins | Appendix C. Red Hat Virtualization User Interface Plugins C.1. About Red Hat Virtualization User Interface Plug-ins Red Hat Virtualization supports plug-ins that expose non-standard features. This makes it easier to use the Red Hat Virtualization Administration Portal to integrate with other systems. Each interface plug-in represents a set of user interface extensions that can be packaged and distributed for use with Red Hat Virtualization. Red Hat Virtualization's User Interface plug-ins integrate with the Administration Portal directly on the client using the JavaScript programming language. Plug-ins are invoked by the Administration Portal and executed in the web browser's JavaScript runtime. User Interface plug-ins can use the JavaScript language and its libraries. At key events during runtime, the Administration Portal invokes individual plug-ins via event handler functions representing Administration-Portal-to-plug-in communication. Although the Administration Portal supports multiple event-handler functions, a plug-in declares functions which are of interest only to its implementation. Each plug-in must register relevant event handler functions as part of the plug-in bootstrap sequence before the plug-in is put to use by the administration portal. To facilitate the plug-in-to-Administration-Portal communication that drives the User Interface extension, the Administration Portal exposes the plug-in API as a global (top-level) pluginApi JavaScript object that individual plug-ins can consume. Each plug-in obtains a separate pluginApi instance, allowing the Administration Portal to control plug-in API-function invocation for each plug-in with respect to the plug-in's life cycle. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-red_hat_enterprise_virtualization_user_interface_plugins |
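As a loose illustration of the bootstrap sequence described above, a plug-in might obtain its pluginApi instance, register the event handlers it cares about, and signal readiness roughly as follows. The register/ready flow and the UiInit event name in this sketch are assumptions for illustration, not a complete API reference:

// Obtain this plug-in's pluginApi instance from the Administration Portal.
var api = parent.pluginApi('hello-plugin');
// Register only the event handler functions relevant to this plug-in.
api.register({
    UiInit: function () {
        // Invoked when the Administration Portal initializes the plug-in.
        window.console.log('hello-plugin initialized');
    }
});
// Tell the Administration Portal that the bootstrap sequence is complete.
api.ready();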
15.2.5. Freshening | 15.2.5. Freshening Freshening a package is similar to upgrading one. Type the following command at a shell prompt: RPM's freshen option checks the versions of the packages specified on the command line against the versions of packages that have already been installed on your system. When a newer version of an already-installed package is processed by RPM's freshen option, it is upgraded to the newer version. However, RPM's freshen option does not install a package if no previously-installed package of the same name exists. This differs from RPM's upgrade option, as an upgrade does install packages, whether or not an older version of the package was already installed. RPM's freshen option works for single packages or package groups. If you have just downloaded a large number of different packages, and you only want to upgrade those packages that are already installed on your system, freshening does the job. If you use freshening, you do not have to delete any unwanted packages from the group that you downloaded before using RPM. In this case, issue the following command: RPM automatically upgrades only those packages that are already installed. | [
"-Fvh foo-1.2-1.i386.rpm",
"-Fvh *.rpm"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Using_RPM-Freshening |
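The difference from upgrading is easiest to see side by side; both commands below are illustrative (any set of package files would do):

rpm -Fvh kernel-*.rpm    # freshen: upgrades only if some version is already installed
rpm -Uvh kernel-*.rpm    # upgrade: installs or upgrades regardless of what is installed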
Chapter 22. Configuring NTP Using ntpd | Chapter 22. Configuring NTP Using ntpd 22.1. Introduction to NTP The Network Time Protocol ( NTP ) enables the accurate dissemination of time and date information in order to keep the time clocks on networked computer systems synchronized to a common reference over the network or the Internet. Many standards bodies around the world have atomic clocks which may be made available as a reference. The satellites that make up the Global Positioning System contain more than one atomic clock, making their time signals potentially very accurate. Their signals can be deliberately degraded for military reasons. An ideal situation would be where each site has a server, with its own reference clock attached, to act as a site-wide time server. Many devices which obtain the time and date via low frequency radio transmissions or the Global Positioning System (GPS) exist. However for most situations, a range of publicly accessible time servers connected to the Internet at geographically dispersed locations can be used. These NTP servers provide " Coordinated Universal Time " ( UTC ). Information about these time servers can be found at www.pool.ntp.org . Accurate time keeping is important for a number of reasons in IT. In networking for example, accurate time stamps in packets and logs are required. Logs are used to investigate service and security issues and so timestamps made on different systems must be made by synchronized clocks to be of real value. As systems and networks become increasingly faster, there is a corresponding need for clocks with greater accuracy and resolution. In some countries there are legal obligations to keep accurately synchronized clocks. Please see www.ntp.org for more information. In Linux systems, NTP is implemented by a daemon running in user space. The default NTP daemon in Red Hat Enterprise Linux 6 is ntpd . The user space daemon updates the system clock, which is a software clock running in the kernel. Linux uses a software clock as its system clock for better resolution than the typical embedded hardware clock referred to as the " Real Time Clock " (RTC) . See the rtc(4) and hwclock(8) man pages for information on hardware clocks. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter ( TSC ) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interrupts. On system start, the system clock reads the time and date from the RTC. The time kept by the RTC will drift away from actual time by up to 5 minutes per month due to temperature variations. Hence the need for the system clock to be constantly synchronized with external time references. When the system clock is being synchronized by ntpd , the kernel will in turn update the RTC every 11 minutes automatically. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Configuring_NTP_Using_ntpd
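The rest of this chapter covers ntpd configuration in detail; as a hedged preview, pointing ntpd at the public pool servers mentioned above typically takes only a few server lines in /etc/ntp.conf (the server names here are illustrative):

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst

The iburst option speeds up the initial synchronization after the daemon starts.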
Installing on GCP | Installing on GCP OpenShift Container Platform 4.17 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/index |
Chapter 89. Example decisions in Red Hat Process Automation Manager for an IDE | Chapter 89. Example decisions in Red Hat Process Automation Manager for an IDE Red Hat Process Automation Manager provides example decisions distributed as Java classes that you can import into your integrated development environment (IDE). You can use these examples to better understand decision engine capabilities or use them as a reference for the decisions that you define in your own Red Hat Process Automation Manager projects. The following example decision sets are some of the examples available in Red Hat Process Automation Manager: Hello World example : Demonstrates basic rule execution and use of debug output State example : Demonstrates forward chaining and conflict resolution through rule salience and agenda groups Fibonacci example : Demonstrates recursion and conflict resolution through rule salience Banking example : Demonstrates pattern matching, basic sorting, and calculation Pet Store example : Demonstrates rule agenda groups, global variables, callbacks, and GUI integration Sudoku example : Demonstrates complex pattern matching, problem solving, callbacks, and GUI integration House of Doom example : Demonstrates backward chaining and recursion Note For optimization examples provided with Red Hat build of OptaPlanner, see Getting started with Red Hat build of OptaPlanner . 89.1. Importing and executing Red Hat Process Automation Manager example decisions in an IDE You can import Red Hat Process Automation Manager example decisions into your integrated development environment (IDE) and execute them to explore how the rules and code function. You can use these examples to better understand decision engine capabilities or use them as a reference for the decisions that you define in your own Red Hat Process Automation Manager projects. Prerequisites Java 8 or later is installed. Maven 3.5.x or later is installed. An IDE is installed, such as Red Hat CodeReady Studio. Procedure Download and unzip the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal to a temporary directory, such as /rhpam-7.13.5-sources . Open your IDE and select File Import Maven Existing Maven Projects , or the equivalent option for importing a Maven project. Click Browse , navigate to ~/rhpam-7.13.5-sources/src/drools-USDVERSION/drools-examples (or, for the Conway's Game of Life example, ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/droolsjbpm-integration-examples ), and import the project. Navigate to the example package that you want to run and find the Java class with the main method. Right-click the Java class and select Run As Java Application to run the example. To run all examples through a basic user interface, run the DroolsExamplesApp.java class (or, for Conway's Game of Life, the DroolsJbpmIntegrationExamplesApp.java class) in the org.drools.examples main class. Figure 89.1. Interface for all examples in drools-examples (DroolsExamplesApp.java) Figure 89.2. Interface for all examples in droolsjbpm-integration-examples (DroolsJbpmIntegrationExamplesApp.java) 89.2. Hello World example decisions (basic rules and debugging) The Hello World example decision set demonstrates how to insert objects into the decision engine working memory, how to match the objects using rules, and how to configure logging to trace the internal activity of the decision engine. 
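If you prefer a terminal to an IDE, an individual example class can also be launched with the Maven exec plugin; a minimal sketch, assuming the drools-examples module builds cleanly with its bundled pom.xml:

cd ~/rhpam-7.13.5-sources/src/drools-<version>/drools-examples
mvn compile exec:java -Dexec.mainClass="org.drools.examples.helloworld.HelloWorldExample"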
The following is an overview of the Hello World example: Name : helloworld Main class : org.drools.examples.helloworld.HelloWorldExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.helloworld.HelloWorld.drl (in src/main/resources ) Objective : Demonstrates basic rule execution and use of debug output In the Hello World example, a KIE session is generated to enable rule execution. All rules require a KIE session for execution. KIE session for rule execution KieServices ks = KieServices.Factory.get(); 1 KieContainer kc = ks.getKieClasspathContainer(); 2 KieSession ksession = kc.newKieSession("HelloWorldKS"); 3 1 Obtains the KieServices factory. This is the main interface that applications use to interact with the decision engine. 2 Creates a KieContainer from the project class path. This detects a /META-INF/kmodule.xml file from which it configures and instantiates a KieContainer with a KieModule . 3 Creates a KieSession based on the "HelloWorldKS" KIE session configuration defined in the /META-INF/kmodule.xml file. Note For more information about Red Hat Process Automation Manager project packaging, see Packaging and deploying a Red Hat Process Automation Manager project . Red Hat Process Automation Manager has an event model that exposes internal engine activity. Two default debug listeners, DebugAgendaEventListener and DebugRuleRuntimeEventListener , print debug event information to the System.err output. The KieRuntimeLogger provides execution auditing, the result of which you can view in a graphical viewer. Debug listeners and audit loggers // Set up listeners. ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. KieRuntimeLogger logger = KieServices.get().getLoggers().newFileLogger( ksession, "./target/helloworld" ); // Set up a ThreadedFileLogger so that the audit view reflects events while debugging. KieRuntimeLogger logger = ks.getLoggers().newThreadedFileLogger( ksession, "./target/helloworld", 1000 ); The logger is a specialized implementation built on the Agenda and RuleRuntime listeners. When the decision engine has finished executing, logger.close() is called. The example creates a single Message object with the message "Hello World" and the status HELLO , inserts it into the KieSession , and executes the rules with fireAllRules() . Data insertion and execution // Insert facts into the KIE session. final Message message = new Message(); message.setMessage( "Hello World" ); message.setStatus( Message.HELLO ); ksession.insert( message ); // Fire the rules. ksession.fireAllRules(); Rule execution uses a data model to pass data as inputs and outputs to the KieSession . The data model in this example has two fields: the message , which is a String , and the status , which can be HELLO or GOODBYE . Data model class public static class Message { public static final int HELLO = 0; public static final int GOODBYE = 1; private String message; private int status; ... } The two rules are located in the file src/main/resources/org/drools/examples/helloworld/HelloWorld.drl . The when condition of the "Hello World" rule states that the rule is activated for each Message object inserted into the KIE session that has the status Message.HELLO . Additionally, two variable bindings are created: the variable message is bound to the message attribute and the variable m is bound to the matched Message object itself.
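The rule bodies themselves are short. A hedged reconstruction of both rules, based on the descriptions in this section, follows; the DRL shipped in HelloWorld.drl may differ in minor details such as the exact replacement message text:

rule "Hello World"
    when
        m : Message( status == Message.HELLO, message : message )
    then
        System.out.println( message );
        // Update the fact in a single block and notify the decision engine.
        modify ( m ) { message = "Goodbye cruel world",
                       status = Message.GOODBYE };
end

rule "Good Bye"
    when
        Message( status == Message.GOODBYE, message : message )
    then
        System.out.println( message );
end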
The then action of the rule specifies to print the content of the bound variable message to System.out , and then changes the values of the message and status attributes of the Message object bound to m . The rule uses the modify statement to apply a block of assignments in one statement and to notify the decision engine of the changes at the end of the block. "Hello World" rule The "Good Bye" rule is similar to the "Hello World" rule except that it matches Message objects that have the status Message.GOODBYE . "Good Bye" rule To execute the example, run the org.drools.examples.helloworld.HelloWorldExample class as a Java application in your IDE. The rule writes to System.out , the debug listener writes to System.err , and the audit logger creates a log file in target/helloworld.log . System.out output in the IDE console System.err output in the IDE console To better understand the execution flow of this example, you can load the audit log file from target/helloworld.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit view shows that the object is inserted, which creates an activation for the "Hello World" rule. The activation is then executed, which updates the Message object and causes the "Good Bye" rule to activate. Finally, the "Good Bye" rule is executed. When you select an event in the Audit View , the origin event, which is the "Activation created" event in this example, is highlighted in green. Figure 89.3. Hello World example Audit View 89.3. State example decisions (forward chaining and conflict resolution) The State example decision set demonstrates how the decision engine uses forward chaining and any changes to facts in the working memory to resolve execution conflicts for rules in a sequence. The example focuses on resolving conflicts through salience values or through agenda groups that you can define in rules. The following is an overview of the State example: Name : state Main classes : org.drools.examples.state.StateExampleUsingSalience , org.drools.examples.state.StateExampleUsingAgendaGroup (in src/main/java ) Module : drools-examples Type : Java application Rule files : org.drools.examples.state.*.drl (in src/main/resources ) Objective : Demonstrates forward chaining and conflict resolution through rule salience and agenda groups A forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. In contrast, a backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. The decision engine in Red Hat Process Automation Manager uses both forward and backward chaining to evaluate rules. The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 89.4. 
Rule evaluation logic using forward and backward chaining In the State example, each State class has fields for its name and its current state (see the class org.drools.examples.state.State ). The following states are the two possible states for each object: NOTRUN FINISHED State class public class State { public static final int NOTRUN = 0; public static final int FINISHED = 1; private final PropertyChangeSupport changes = new PropertyChangeSupport( this ); private String name; private int state; ... setters and getters go here... } The State example contains two versions of the same example to resolve rule execution conflicts: A StateExampleUsingSalience version that resolves conflicts by using rule salience A StateExampleUsingAgendaGroups version that resolves conflicts by using rule agenda groups Both versions of the state example involve four State objects: A , B , C , and D . Initially, their states are set to NOTRUN , which is the default value for the constructor that the example uses. State example using salience The StateExampleUsingSalience version of the State example uses salience values in rules to resolve rule execution conflicts. Rules with a higher salience value are given higher priority when ordered in the activation queue. The example inserts each State instance into the KIE session and then calls fireAllRules() . Salience State example execution final State a = new State( "A" ); final State b = new State( "B" ); final State c = new State( "C" ); final State d = new State( "D" ); ksession.insert( a ); ksession.insert( b ); ksession.insert( c ); ksession.insert( d ); ksession.fireAllRules(); // Dispose KIE session if stateful (not required if stateless). ksession.dispose(); To execute the example, run the org.drools.examples.state.StateExampleUsingSalience class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Salience State example output in the IDE console Four rules are present. First, the "Bootstrap" rule fires, setting A to state FINISHED , which then causes B to change its state to FINISHED . Objects C and D are both dependent on B , causing a conflict that is resolved by the salience values. To better understand the execution flow of this example, you can load the audit log file from target/state.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows that the assertion of the object A in the state NOTRUN activates the "Bootstrap" rule, while the assertions of the other objects have no immediate effect. Figure 89.5. Salience State example Audit View Rule "Bootstrap" in salience State example The execution of the "Bootstrap" rule changes the state of A to FINISHED , which activates rule "A to B" . Rule "A to B" in salience State example The execution of rule "A to B" changes the state of B to FINISHED , which activates both rules "B to C" and "B to D" , placing their activations onto the decision engine agenda. Rules "B to C" and "B to D" in salience State example From this point on, both rules may fire and, therefore, the rules are in conflict. The conflict resolution strategy enables the decision engine agenda to decide which rule to fire. Rule "B to C" has the higher salience value ( 10 versus the default salience value of 0 ), so it fires first, modifying object C to state FINISHED . 
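The rule bodies for the salience version are elided above; a hedged reconstruction based on the described behavior follows (the shipped DRL may differ slightly). Because the State facts support property-change notification, the actions can call the setters directly:

rule "Bootstrap"
    when
        a : State( name == "A", state == State.NOTRUN )
    then
        System.out.println( a.getName() + " finished" );
        a.setState( State.FINISHED );
end

rule "A to B"
    when
        State( name == "A", state == State.FINISHED )
        b : State( name == "B", state == State.NOTRUN )
    then
        System.out.println( b.getName() + " finished" );
        b.setState( State.FINISHED );
end

rule "B to C"
    salience 10
    when
        State( name == "B", state == State.FINISHED )
        c : State( name == "C", state == State.NOTRUN )
    then
        System.out.println( c.getName() + " finished" );
        c.setState( State.FINISHED );
end

rule "B to D"
    when
        State( name == "B", state == State.FINISHED )
        d : State( name == "D", state == State.NOTRUN )
    then
        System.out.println( d.getName() + " finished" );
        d.setState( State.FINISHED );
end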
The Audit View in your IDE shows the modification of the State object in the rule "A to B" , which results in two activations being in conflict. You can also use the Agenda View in your IDE to investigate the state of the decision engine agenda. In this example, the Agenda View shows the breakpoint in the rule "A to B" and the state of the agenda with the two conflicting rules. Rule "B to D" fires last, modifying object D to state FINISHED . Figure 89.6. Salience State example Agenda View State example using agenda groups The StateExampleUsingAgendaGroups version of the State example uses agenda groups in rules to resolve rule execution conflicts. Agenda groups enable you to partition the decision engine agenda to provide more execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. Initially, a working memory has its focus on the agenda group MAIN . Rules in an agenda group only fire when the group receives the focus. You can set the focus either by using the method setFocus() or the rule attribute auto-focus . The auto-focus attribute enables the rule to be given a focus automatically for its agenda group when the rule is matched and activated. In this example, the auto-focus attribute enables rule "B to C" to fire before "B to D" . Rule "B to C" in agenda group State example The rule "B to C" calls setFocus() on the agenda group "B to D" , enabling its active rules to fire, which then enables the rule "B to D" to fire. Rule "B to D" in agenda group State example To execute the example, run the org.drools.examples.state.StateExampleUsingAgendaGroups class as a Java application in your IDE. After the execution, the following output appears in the IDE console window (same as the salience version of the State example): Agenda group State example output in the IDE console Dynamic facts in the State example Another notable concept in this State example is the use of dynamic facts , based on objects that implement a PropertyChangeListener object. In order for the decision engine to see and react to changes of fact properties, the application must notify the decision engine that changes occurred. You can configure this communication explicitly in the rules by using the modify statement, or implicitly by specifying that the facts implement the PropertyChangeSupport interface as defined by the JavaBeans specification. This example demonstrates how to use the PropertyChangeSupport interface to avoid the need for explicit modify statements in the rules. To make use of this interface, ensure that your facts implement PropertyChangeSupport in the same way that the class org.drools.example.State implements it, and then use the following code in the DRL rule file to configure the decision engine to listen for property changes on those facts: Declaring a dynamic fact When you use PropertyChangeListener objects, each setter must implement additional code for the notification. For example, the following setter for state is in the class org.drools.examples : Setter example with PropertyChangeSupport public void setState(final int newState) { int oldState = this.state; this.state = newState; this.changes.firePropertyChange( "state", oldState, newState ); } 89.4. Fibonacci example decisions (recursion and conflict resolution) The Fibonacci example decision set demonstrates how the decision engine uses recursion to resolve execution conflicts for rules in a sequence. 
The example focuses on resolving conflicts through salience values that you can define in rules. The following is an overview of the Fibonacci example: Name : fibonacci Main class : org.drools.examples.fibonacci.FibonacciExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.fibonacci.Fibonacci.drl (in src/main/resources ) Objective : Demonstrates recursion and conflict resolution through rule salience The Fibonacci Numbers form a sequence starting with 0 and 1. The next Fibonacci number is obtained by adding the two preceding Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, and so on. The Fibonacci example uses the single fact class Fibonacci with the following two fields: sequence value The sequence field indicates the position of the object in the Fibonacci number sequence. The value field shows the value of that Fibonacci object for that sequence position, where -1 indicates a value that still needs to be computed. Fibonacci class public static class Fibonacci { private int sequence; private long value; public Fibonacci( final int sequence ) { this.sequence = sequence; this.value = -1; } ... setters and getters go here... } To execute the example, run the org.drools.examples.fibonacci.FibonacciExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Fibonacci example output in the IDE console To achieve this behavior in Java, the example inserts a single Fibonacci object with a sequence field of 50 . The example then uses a recursive rule to insert the other 49 Fibonacci objects. Instead of implementing the PropertyChangeSupport interface to use dynamic facts, this example uses the MVEL dialect modify keyword to enable a block setter action and notify the decision engine of changes. Fibonacci example execution ksession.insert( new Fibonacci( 50 ) ); ksession.fireAllRules(); This example uses the following three rules: "Recurse" "Bootstrap" "Calculate" The rule "Recurse" matches each asserted Fibonacci object with a value of -1 , creating and asserting a new Fibonacci object with a sequence of one less than the currently matched object. Each time a Fibonacci object is added while the one with a sequence field equal to 1 does not exist, the rule re-matches and fires again. The not conditional element is used to stop the rule matching once you have all 50 Fibonacci objects in memory. The rule also has a salience value because you need to have all 50 Fibonacci objects asserted before you execute the "Bootstrap" rule. Rule "Recurse" To better understand the execution flow of this example, you can load the audit log file from target/fibonacci.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows the original assertion of the Fibonacci object with a sequence field of 50 , done from Java code. From there on, the Audit View shows the continual recursion of the rule, where each asserted Fibonacci object causes the "Recurse" rule to become activated and to fire again. Figure 89.7. Rule "Recurse" in Audit View When a Fibonacci object with a sequence field of 2 is asserted, the "Bootstrap" rule is matched and activated along with the "Recurse" rule.
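The three rule bodies are elided above; hedged reconstructions based on the surrounding description follow (the shipped Fibonacci.drl may differ in detail). Note the MVEL dialect, which permits direct property access such as f.sequence:

rule "Recurse"
    salience 10
    dialect "mvel"
    when
        f : Fibonacci ( value == -1 )
        not ( Fibonacci ( sequence == 1 ) )
    then
        // Assert the preceding Fibonacci object until sequence 1 exists.
        insert( new Fibonacci( f.sequence - 1 ) );
end

rule "Bootstrap"
    dialect "mvel"
    when
        f : Fibonacci( sequence == 1 || == 2, value == -1 )
    then
        modify ( f ) { value = 1 };
        System.out.println( f.sequence + " == " + f.value );
end

rule "Calculate"
    dialect "mvel"
    when
        f1 : Fibonacci( s1 : sequence, value != -1 )
        f2 : Fibonacci( sequence == ( s1 + 1 ), value != -1 )
        f3 : Fibonacci( s3 : sequence == ( f2.sequence + 1 ), value == -1 )
    then
        modify ( f3 ) { value = f1.value + f2.value };
        System.out.println( s3 + " == " + f3.value );
end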
Notice the multiple restrictions on field sequence that test for equality with 1 or 2 : Rule "Bootstrap" You can also use the Agenda View in your IDE to investigate the state of the decision engine agenda. The "Bootstrap" rule does not fire yet because the "Recurse" rule has a higher salience value. Figure 89.8. Rules "Recurse" and "Bootstrap" in Agenda View 1 When a Fibonacci object with a sequence of 1 is asserted, the "Bootstrap" rule is matched again, causing two activations for this rule. The "Recurse" rule does not match and activate because the not conditional element stops the rule matching as soon as a Fibonacci object with a sequence of 1 exists. Figure 89.9. Rules "Recurse" and "Bootstrap" in Agenda View 2 The "Bootstrap" rule sets the objects with a sequence of 1 and 2 to a value of 1 . Now that you have two Fibonacci objects with values not equal to -1 , the "Calculate" rule is able to match. At this point in the example, nearly 50 Fibonacci objects exist in the working memory. You need to select a suitable triple to calculate each of their values in turn. If you use three Fibonacci patterns in a rule without field constraints to confine the possible cross products, the result would be 50x49x48 possible combinations, leading to about 125,000 possible rule firings, most of them incorrect. The "Calculate" rule uses field constraints to evaluate the three Fibonacci patterns in the correct order. This technique is called cross-product matching . The first pattern finds any Fibonacci object with a value != -1 and binds both the pattern and the field. The second Fibonacci object does the same thing, but adds an additional field constraint to ensure that its sequence is greater by one than the Fibonacci object bound to f1 . When this rule fires for the first time, you know that only sequences 1 and 2 have values of 1 , and the two constraints ensure that f1 references sequence 1 and that f2 references sequence 2 . The final pattern finds the Fibonacci object with a value equal to -1 and with a sequence one greater than f2 . At this point in the example, three Fibonacci objects are correctly selected from the available cross products, and you can calculate the value for the third Fibonacci object that is bound to f3 . Rule "Calculate" The modify statement updates the value of the Fibonacci object bound to f3 . This means that you now have another new Fibonacci object with a value not equal to -1 , which allows the "Calculate" rule to re-match and calculate the Fibonacci number. The debug view or Audit View of your IDE shows how the firing of the last "Bootstrap" rule modifies the Fibonacci object, enabling the "Calculate" rule to match, which then modifies another Fibonacci object that enables the "Calculate" rule to match again. This process continues until the value is set for all Fibonacci objects. Figure 89.10. Rules in Audit View 89.5. Pricing example decisions (decision tables) The Pricing example decision set demonstrates how to use a spreadsheet decision table for calculating the retail cost of an insurance policy in tabular format instead of directly in a DRL file. 
The following is an overview of the Pricing example: Name : decisiontable Main class : org.drools.examples.decisiontable.PricingRuleDTExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.decisiontable.ExamplePolicyPricing.xls (in src/main/resources ) Objective : Demonstrates use of spreadsheet decision tables to define rules Spreadsheet decision tables are XLS or XLSX spreadsheets that contain business rules defined in a tabular format. You can include spreadsheet decision tables with standalone Red Hat Process Automation Manager projects or upload them to projects in Business Central. Each row in a decision table is a rule, and each column is a condition, an action, or another rule attribute. After you create and upload your decision tables into your Red Hat Process Automation Manager project, the rules you defined are compiled into Drools Rule Language (DRL) rules as with all other rule assets. The purpose of the Pricing example is to provide a set of business rules to calculate the base price and a discount for a car driver applying for a specific type of insurance policy. The driver's age and history and the policy type all contribute to calculate the basic premium, and additional rules calculate potential discounts for which the driver might be eligible. To execute the example, run the org.drools.examples.decisiontable.PricingRuleDTExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: The code to execute the example follows the typical execution pattern: the rules are loaded, the facts are inserted, and a stateless KIE session is created. The difference in this example is that the rules are defined in an ExamplePolicyPricing.xls file instead of a DRL file or other source. The spreadsheet file is loaded into the decision engine using templates and DRL rules. Spreadsheet decision table setup The ExamplePolicyPricing.xls spreadsheet contains two decision tables in the first tab: Base pricing rules Promotional discount rules As the example spreadsheet demonstrates, you can use only the first tab of a spreadsheet to create decision tables, but multiple tables can be within a single tab. Decision tables do not necessarily follow top-down logic, but are more of a means to capture data resulting in rules. The evaluation of the rules is not necessarily in the given order, because all of the normal mechanics of the decision engine still apply. This is why you can have multiple decision tables in the same tab of a spreadsheet. The decision tables are executed through the corresponding rule template files BasePricing.drt and PromotionalPricing.drt . These template files reference the decision tables through their template parameter and directly reference the various headers for the conditions and actions in the decision tables. BasePricing.drt rule template file PromotionalPricing.drt rule template file The rules are executed through the kmodule.xml reference of the KIE Session DTableWithTemplateKB , which specifically mentions the ExamplePolicyPricing.xls spreadsheet and is required for successful execution of the rules. This execution method enables you to execute the rules as a standalone unit (as in this example) or to include the rules in a packaged knowledge JAR (KJAR) file, so that the spreadsheet is packaged along with the rules for execution. 
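Neither template file is reproduced above. As a generic sketch of the rule-template shape, with illustrative column names rather than the shipped ones: a template header lists the parameters supplied by each spreadsheet row, and each row then generates one rule:

template header
minAge
maxAge
profile
policyType
basePrice
reason

package org.drools.examples.decisiontable;

template "Base pricing"

rule "Base pricing_@{row.rowNumber}"
    when
        Driver( age >= @{minAge}, age <= @{maxAge}, locationRiskProfile == "@{profile}" )
        policy : Policy( type == "@{policyType}" )
    then
        policy.setBasePrice( @{basePrice} );
        System.out.println( "@{reason}" );
end

end template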
The following section of the kmodule.xml file is required for the execution of the rules and spreadsheet to work successfully: <kbase name="DecisionTableKB" packages="org.drools.examples.decisiontable"> <ksession name="DecisionTableKS" type="stateless"/> </kbase> <kbase name="DTableWithTemplateKB" packages="org.drools.examples.decisiontable-template"> <ruleTemplate dtable="org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls" template="org/drools/examples/decisiontable-template/BasePricing.drt" row="3" col="3"/> <ruleTemplate dtable="org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls" template="org/drools/examples/decisiontable-template/PromotionalPricing.drt" row="18" col="3"/> <ksession name="DTableWithTemplateKS"/> </kbase> As an alternative to executing the decision tables using rule template files, you can use the DecisionTableConfiguration object and specify an input spreadsheet as the input type, such as DecisionTableInputType.xls : DecisionTableConfiguration dtableconfiguration = KnowledgeBuilderFactory.newDecisionTableConfiguration(); dtableconfiguration.setInputType( DecisionTableInputType.XLS ); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource xlsRes = ResourceFactory.newClassPathResource( "ExamplePolicyPricing.xls", getClass() ); kbuilder.add( xlsRes, ResourceType.DTABLE, dtableconfiguration ); The Pricing example uses two fact types: Driver Policy . The example sets the default values for both facts in their respective Java classes Driver.java and Policy.java . The Driver is 30 years old, has had no prior claims, and currently has a risk profile of LOW . The Policy that the driver is applying for is COMPREHENSIVE . In any decision table, each row is considered a different rule and each column is a condition or an action. Each row is evaluated in a decision table unless the agenda is cleared upon execution. Decision table spreadsheets (XLS or XLSX) require two key areas that define rule data: A RuleSet area A RuleTable area The RuleSet area of the spreadsheet defines elements that you want to apply globally to all rules in the same package (not only the spreadsheet), such as a rule set name or universal rule attributes. The RuleTable area defines the actual rules (rows) and the conditions, actions, and other rule attributes (columns) that constitute that rule table within the specified rule set. A decision table spreadsheet can contain multiple RuleTable areas, but only one RuleSet area. Figure 89.11. Decision table configuration The RuleTable area also defines the objects to which the rule attributes apply, in this case Driver and Policy , followed by constraints on the objects. For example, the Driver object constraint that defines the Age Bracket column is age >= $1, age <= $2 , where the comma-separated range is defined in the table column values, such as 18,24 . Base pricing rules The Base pricing rules decision table in the Pricing example evaluates the age, risk profile, number of claims, and policy type of the driver and produces the base price of the policy based on these conditions. Figure 89.12. Base price calculation The Driver attributes are defined in the following table columns: Age Bracket : The age bracket has a definition for the condition age >= $1, age <= $2 , which defines the condition boundaries for the driver's age. This condition column highlights the use of $1 and $2 , which is comma delimited in the spreadsheet.
You can write these values as 18,24 or 18, 24 and both formats work in the execution of the business rules. Location risk profile : The risk profile is a string that the example program passes always as LOW but can be changed to reflect MED or HIGH . Number of prior claims : The number of claims is defined as an integer that the condition column must match exactly to trigger the action. The value is not a range, only exact matches. The Policy of the decision table is used in both the conditions and the actions of the rule and has attributes defined in the following table columns: Policy type applying for : The policy type is a condition that is passed as a string that defines the type of coverage: COMPREHENSIVE , FIRE_THEFT , or THIRD_PARTY . Base $ AUD : The basePrice is defined as an ACTION that sets the price through the constraint policy.setBasePrice($param); based on the spreadsheet cells corresponding to this value. When you execute the corresponding DRL rule for this decision table, the then portion of the rule executes this action statement on the true conditions matching the facts and sets the base price to the corresponding value. Record Reason : When the rule successfully executes, this action generates an output message to the System.out console reflecting which rule fired. This is later captured in the application and printed. The example also uses the first column on the left to categorize rules. This column is for annotation only and has no effect on rule execution. Promotional discount rules The Promotional discount rules decision table in the Pricing example evaluates the age, number of prior claims, and policy type of the driver to generate a potential discount on the price of the insurance policy. Figure 89.13. Discount calculation This decision table contains the conditions for the discount for which the driver might be eligible. Similar to the base price calculation, this table evaluates the Age , Number of prior claims of the driver, and the Policy type applying for to determine a Discount % rate to be applied. For example, if the driver is 30 years old, has no prior claims, and is applying for a COMPREHENSIVE policy, the driver is given a discount of 20 percent. 89.6. Pet Store example decisions (agenda groups, global variables, callbacks, and GUI integration) The Pet Store example decision set demonstrates how to use agenda groups and global variables in rules and how to integrate Red Hat Process Automation Manager rules with a graphical user interface (GUI), in this case a Swing-based desktop application. The example also demonstrates how to use callbacks to interact with a running decision engine to update the GUI based on changes in the working memory at run time. The following is an overview of the Pet Store example: Name : petstore Main class : org.drools.examples.petstore.PetStoreExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.petstore.PetStore.drl (in src/main/resources ) Objective : Demonstrates rule agenda groups, global variables, callbacks, and GUI integration In the Pet Store example, the sample PetStoreExample.java class defines the following principal classes (in addition to several classes to handle Swing events): Petstore contains the main() method. PetStoreUI is responsible for creating and displaying the Swing-based GUI. This class contains several smaller classes, mainly for responding to various GUI events, such as user mouse clicks. TableModel holds the table data.
This class is essentially a JavaBean that extends the Swing class AbstractTableModel . CheckoutCallback enables the GUI to interact with the rules. Order keeps the items that you want to buy. Purchase stores details of the order and the products that you are buying. Product is a JavaBean containing details of the product available for purchase and its price. Much of the Java code in this example is either plain JavaBean or Swing based. For more information about Swing components, see the Java tutorial on Creating a GUI with JFC/Swing . Rule execution behavior in the Pet Store example Unlike other example decision sets where the facts are asserted and fired immediately, the Pet Store example does not execute the rules until more facts are gathered based on user interaction. The example executes rules through a PetStoreUI object, created by a constructor, that accepts the Vector object stock for collecting the products. The example then uses an instance of the CheckoutCallback class containing the rule base that was previously loaded. Pet Store KIE container and fact execution setup // KieServices is the factory for all KIE services. KieServices ks = KieServices.Factory.get(); // Create a KIE container on the class path. KieContainer kc = ks.getKieClasspathContainer(); // Create the stock. Vector<Product> stock = new Vector<Product>(); stock.add( new Product( "Gold Fish", 5 ) ); stock.add( new Product( "Fish Tank", 25 ) ); stock.add( new Product( "Fish Food", 2 ) ); // A callback is responsible for populating the working memory and for firing all rules. PetStoreUI ui = new PetStoreUI( stock, new CheckoutCallback( kc ) ); ui.createAndShowGUI(); The Java code that fires the rules is in the CheckoutCallBack.checkout() method. This method is triggered when the user clicks Checkout in the UI. Rule execution from CheckoutCallBack.checkout() public String checkout(JFrame frame, List<Product> items) { Order order = new Order(); // Iterate through list and add to cart. for ( Product p: items ) { order.addItem( new Purchase( order, p ) ); } // Add the JFrame to the ApplicationData to allow for user interaction. // From the KIE container, a KIE session is created based on // its definition and configuration in the META-INF/kmodule.xml file. KieSession ksession = kcontainer.newKieSession("PetStoreKS"); ksession.setGlobal( "frame", frame ); ksession.setGlobal( "textArea", this.output ); ksession.insert( new Product( "Gold Fish", 5 ) ); ksession.insert( new Product( "Fish Tank", 25 ) ); ksession.insert( new Product( "Fish Food", 2 ) ); ksession.insert( new Product( "Fish Food Sample", 0 ) ); ksession.insert( order ); // Execute rules. ksession.fireAllRules(); // Return the state of the cart return order.toString(); } The example code passes two elements into the CheckoutCallBack.checkout() method. One element is the handle for the JFrame Swing component surrounding the output text frame, found at the bottom of the GUI. The second element is a list of order items, which comes from the TableModel that stores the information from the Table area at the upper-right section of the GUI. The for loop transforms the list of order items coming from the GUI into the Order JavaBean, also contained in the file PetStoreExample.java . In this case, the rule is firing in a stateless KIE session because all of the data is stored in Swing components and is not executed until the user clicks Checkout in the UI.
Each time the user clicks Checkout , the content of the list is moved from the Swing TableModel into the KIE session working memory and is then executed with the ksession.fireAllRules() method. Within this code, there are nine calls to KieSession . The first of these creates a new KieSession from the KieContainer (the example passed in this KieContainer from the CheckoutCallBack class in the main() method). The next two calls pass in the two objects that hold the global variables in the rules: the Swing text area and the Swing frame used for writing messages. More inserts put information on products into the KieSession , as well as the order list. The final call is the standard fireAllRules() . Pet Store rule file imports, global variables, and Java functions The PetStore.drl file contains the standard package and import statements to make various Java classes available to the rules. The rule file also includes global variables to be used within the rules, defined as frame and textArea . The global variables hold references to the Swing JFrame and JTextArea components that were previously passed on by the Java code that called the setGlobal() method. Unlike standard variables in rules, which expire as soon as the rule has fired, global variables retain their value for the lifetime of the KIE session. This means the contents of these global variables are available for evaluation on all subsequent rules. PetStore.drl package, imports, and global variables package org.drools.examples; import org.kie.api.runtime.KieRuntime; import org.drools.examples.petstore.PetStoreExample.Order; import org.drools.examples.petstore.PetStoreExample.Purchase; import org.drools.examples.petstore.PetStoreExample.Product; import java.util.ArrayList; import javax.swing.JOptionPane; import javax.swing.JFrame; global JFrame frame global javax.swing.JTextArea textArea The PetStore.drl file also contains two functions that the rules in the file use: PetStore.drl Java functions function void doCheckout(JFrame frame, KieRuntime krt) { Object[] options = {"Yes", "No"}; int n = JOptionPane.showOptionDialog(frame, "Would you like to checkout?", "", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); if (n == 0) { krt.getAgenda().getAgendaGroup( "checkout" ).setFocus(); } } function boolean requireTank(JFrame frame, KieRuntime krt, Order order, Product fishTank, int total) { Object[] options = {"Yes", "No"}; int n = JOptionPane.showOptionDialog(frame, "Would you like to buy a tank for your " + total + " fish?", "Purchase Suggestion", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); System.out.print( "SUGGESTION: Would you like to buy a tank for your " + total + " fish? - " ); if (n == 0) { Purchase purchase = new Purchase( order, fishTank ); krt.insert( purchase ); order.addItem( purchase ); System.out.println( "Yes" ); } else { System.out.println( "No" ); } return true; } The two functions perform the following actions: doCheckout() displays a dialog that asks the user if she or he wants to check out. If the user does, the focus is set to the checkout agenda group, enabling rules in that group to (potentially) fire. requireTank() displays a dialog that asks the user if she or he wants to buy a fish tank. If the user does, a new fish tank Product is added to the order list in the working memory. Note For this example, all rules and functions are within the same rule file for efficiency.
In a production environment, you typically separate the rules and functions in different files or build a static Java method and import the files using the import function, such as import function my.package.name.hello . Pet Store rules with agenda groups Most of the rules in the Pet Store example use agenda groups to control rule execution. Agenda groups allow you to partition the decision engine agenda to provide more execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. Initially, a working memory has its focus on the agenda group MAIN . Rules in an agenda group only fire when the group receives the focus. You can set the focus either by using the method setFocus() or the rule attribute auto-focus . The auto-focus attribute enables the rule to be given a focus automatically for its agenda group when the rule is matched and activated. The Pet Store example uses the following agenda groups for rules: "init" "evaluate" "show items" "checkout" For example, the sample rule "Explode Cart" uses the "init" agenda group to ensure that it has the option to fire and insert shopping cart items into the KIE session working memory: Rule "Explode Cart" This rule matches against all orders that do not yet have their grossTotal calculated. The execution loops for each purchase item in that order. The rule uses the following features related to its agenda group: agenda-group "init" defines the name of the agenda group. In this case, only one rule is in the group. However, neither the Java code nor a rule consequence sets the focus to this group, and therefore it relies on the auto-focus attribute for its chance to fire. auto-focus true ensures that this rule, while being the only rule in the agenda group, gets a chance to fire when fireAllRules() is called from the Java code. kcontext... .setFocus() sets the focus to the "show items" and "evaluate" agenda groups, enabling their rules to fire. In practice, you loop through all items in the order, insert them into memory, and then fire the other rules after each insertion. The "show items" agenda group contains only one rule, "Show Items" . For each purchase in the order currently in the KIE session working memory, the rule logs details to the text area at the bottom of the GUI, based on the textArea variable defined in the rule file. Rule "Show Items" The "evaluate" agenda group also gains focus from the "Explode Cart" rule. This agenda group contains two rules, "Free Fish Food Sample" and "Suggest Tank" , which are executed in that order. Rule "Free Fish Food Sample" The rule "Free Fish Food Sample" fires only if all of the following conditions are true: 1 The agenda group "evaluate" is being evaluated in the rules execution. 2 User does not already have fish food. 3 User does not already have a free fish food sample. 4 User has a goldfish in the order. If the order facts meet all of these requirements, then a new product is created (Fish Food Sample) and is added to the order in working memory. Rule "Suggest Tank" The rule "Suggest Tank" fires only if the following conditions are true: 1 User does not have a fish tank in the order. 2 User has more than five fish in the order. When the rule fires, it calls the requireTank() function defined in the rule file. This function displays a dialog that asks the user if she or he wants to buy a fish tank. 
If the user does, a new fish tank Product is added to the order list in the working memory. When the rule calls the requireTank() function, the rule passes the frame global variable so that the function has a handle for the Swing GUI. The "do checkout" rule in the Pet Store example has no agenda group and no when conditions, so the rule is always executed and considered part of the default MAIN agenda group. Rule "do checkout" When the rule fires, it calls the doCheckout() function defined in the rule file. This function displays a dialog that asks the user if she or he wants to check out. If the user does, the focus is set to the checkout agenda group, enabling rules in that group to (potentially) fire. When the rule calls the doCheckout() function, the rule passes the frame global variable so that the function has a handle for the Swing GUI. Note This example also demonstrates a troubleshooting technique if results are not executing as you expect: You can remove the conditions from the when statement of a rule and test the action in the then statement to verify that the action is performed correctly. The "checkout" agenda group contains three rules for processing the order checkout and applying any discounts: "Gross Total" , "Apply 5% Discount" , and "Apply 10% Discount" . Rules "Gross Total", "Apply 5% Discount", and "Apply 10% Discount" If the user has not already calculated the gross total, the "Gross Total" rule accumulates the product prices into a total, puts this total into the KIE session, and displays it through the Swing JTextArea using the textArea global variable. If the gross total is between 10 and 20 (currency units), the "Apply 5% Discount" rule calculates the discounted total, adds it to the KIE session, and displays it in the text area. If the gross total is not less than 20 , the "Apply 10% Discount" rule calculates the discounted total, adds it to the KIE session, and displays it in the text area. Pet Store example execution Similar to other Red Hat Process Automation Manager decision examples, you execute the Pet Store example by running the org.drools.examples.petstore.PetStoreExample class as a Java application in your IDE. When you execute the Pet Store example, the Pet Store Demo GUI window appears. This window displays a list of available products (upper left), an empty list of selected products (upper right), Checkout and Reset buttons (middle), and an empty system messages area (bottom). Figure 89.14. Pet Store example GUI after launch The following events occurred in this example to establish this execution behavior: The main() method has run and loaded the rule base but has not yet fired the rules. So far, this is the only code in connection with rules that has been run. A new PetStoreUI object has been created and given a handle for the rule base, for later use. Various Swing components have performed their functions, and the initial UI screen is displayed and waits for user input. You can click various products from the list to explore the UI setup: Figure 89.15. Explore the Pet Store example GUI No rules code has been fired yet. The UI uses Swing code to detect user mouse clicks and add selected products to the TableModel object for display in the upper-right corner of the UI. This example illustrates the Model-View-Controller design pattern. When you click Checkout , the rules are then fired in the following way: Method CheckOutCallBack.checkout() is called (eventually) by the Swing class waiting for a user to click Checkout .
This inserts the data from the TableModel object (upper-right corner of the UI) into the KIE session working memory. The method then fires the rules. The "Explode Cart" rule is the first to fire, with the auto-focus attribute set to true . The rule loops through all of the products in the cart, ensures that the products are in the working memory, and then gives the "show items" and "evaluate" agenda groups the option to fire. The rules in these groups add the contents of the cart to the text area (bottom of the UI), evaluate if you are eligible for free fish food, and determine whether to ask if you want to buy a fish tank. Figure 89.16. Fish tank qualification The "do checkout" rule is the next to fire because no other agenda group currently has focus and because it is part of the default MAIN agenda group. This rule always calls the doCheckout() function, which asks you if you want to check out. The doCheckout() function sets the focus to the "checkout" agenda group, giving the rules in that group the option to fire. The rules in the "checkout" agenda group display the contents of the cart and apply the appropriate discount. Swing then waits for user input to either select more products (and cause the rules to fire again) or to close the UI. Figure 89.17. Pet Store example GUI after all rules have fired You can add more System.out calls to demonstrate this flow of events in your IDE console: System.out output in the IDE console 89.7. Honest Politician example decisions (truth maintenance and salience) The Honest Politician example decision set demonstrates the concept of truth maintenance with logical insertions and the use of salience in rules. The following is an overview of the Honest Politician example: Name : honestpolitician Main class : org.drools.examples.honestpolitician.HonestPoliticianExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.honestpolitician.HonestPolitician.drl (in src/main/resources ) Objective : Demonstrates the concept of truth maintenance based on the logical insertion of facts and the use of salience in rules The basic premise of the Honest Politician example is that an object can only exist while a statement is true. A rule consequence can logically insert an object with the insertLogical() method. This means the object remains in the KIE session working memory as long as the rule that logically inserted it remains true. When the rule is no longer true, the object is automatically retracted. In this example, rule execution causes a group of politicians to change from being honest to being dishonest as a result of a corrupt corporation. As each politician is evaluated, they start out with their honesty attribute being set to true , but a rule fires that makes the politicians no longer honest. As they switch their state from being honest to dishonest, they are then removed from the working memory. The rule salience notifies the decision engine how to prioritize any rules that have a salience defined for them, otherwise utilizing the default salience value of 0 . Rules with a higher salience value are given higher priority when ordered in the activation queue. Politician and Hope classes The sample class Politician in the example is configured for an honest politician. The Politician class is made up of a String item name and a Boolean item honest : Politician class public class Politician { private String name; private boolean honest; ... } The Hope class determines if a Hope object exists.
This class has no meaningful members, but is present in the working memory as long as society has hope. Hope class public class Hope { public Hope() { } } Rule definitions for politician honesty In the Honest Politician example, when at least one honest politician exists in the working memory, the "We have an honest Politician" rule logically inserts a new Hope object. As soon as all politicians become dishonest, the Hope object is automatically retracted. This rule has a salience attribute with a value of 10 to ensure that it fires before any other rule, because at that stage the "Hope is Dead" rule is true. Rule "We have an honest politician" As soon as a Hope object exists, the "Hope Lives" rule matches and fires. This rule also has a salience value of 10 so that it takes priority over the "Corrupt the Honest" rule. Rule "Hope Lives" Initially, four honest politicians exist so this rule has four activations, all in conflict. Each rule fires in turn, corrupting each politician so that they are no longer honest. When all four politicians have been corrupted, no politicians have the property honest == true . The rule "We have an honest Politician" is no longer true and the object it logically inserted (due to the last execution of new Hope() ) is automatically retracted. Rule "Corrupt the Honest" With the Hope object automatically retracted through the truth maintenance system, the conditional element not applied to Hope is no longer true so that the "Hope is Dead" rule matches and fires. Rule "Hope is Dead" Example execution and audit trail In the HonestPoliticianExample.java class, the four politicians with the honest state set to true are inserted for evaluation against the defined business rules: HonestPoliticianExample.java class execution public static void execute( KieContainer kc ) { KieSession ksession = kc.newKieSession("HonestPoliticianKS"); final Politician p1 = new Politician( "President of Umpa Lumpa", true ); final Politician p2 = new Politician( "Prime Minster of Cheeseland", true ); final Politician p3 = new Politician( "Tsar of Pringapopaloo", true ); final Politician p4 = new Politician( "Omnipotence Om", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); ksession.fireAllRules(); ksession.dispose(); } To execute the example, run the org.drools.examples.honestpolitician.HonestPoliticianExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Execution output in the IDE console The output shows that, while there is at least one honest politician, democracy lives. However, as each politician is corrupted by some corporation, all politicians become dishonest, and democracy is dead. 
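The four rules described above are short. The following DRL sketch is consistent with the behavior and output shown in this section, although the exact source in HonestPolitician.drl may differ slightly (package and import statements are omitted here): rule "We have an honest Politician" salience 10 when // True while at least one honest politician remains. exists( Politician( honest == true ) ) then // Logically inserted: retracted automatically by the truth // maintenance system when the condition becomes false. insertLogical( new Hope() ); end rule "Hope Lives" salience 10 when exists( Hope() ) then System.out.println( "Hurrah!!! Democracy Lives" ); end rule "Corrupt the Honest" when politician : Politician( honest == true ) exists( Hope() ) then System.out.println( "I'm an evil corporation and I have corrupted " + politician.getName() ); modify( politician ) { setHonest( false ) }; end rule "Hope is Dead" when not( Hope() ) then System.out.println( "We are all Doomed!!! Democracy is Dead" ); end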
To better understand the execution flow of this example, you can modify the HonestPoliticianExample.java class to include a DebugRuleRuntimeEventListener listener and an audit logger to view execution details: HonestPoliticianExample.java class with an audit logger package org.drools.examples.honestpolitician; import org.kie.api.KieServices; import org.kie.api.event.rule.DebugAgendaEventListener; 1 import org.kie.api.event.rule.DebugRuleRuntimeEventListener; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public class HonestPoliticianExample { /** * @param args */ public static void main(final String[] args) { KieServices ks = KieServices.Factory.get(); 2 //ks = KieServices.Factory.get(); KieContainer kc = KieServices.Factory.get().getKieClasspathContainer(); System.out.println(kc.verify().getMessages().toString()); //execute( kc ); execute( ks, kc); 3 } public static void execute( KieServices ks, KieContainer kc ) { 4 KieSession ksession = kc.newKieSession("HonestPoliticianKS"); final Politician p1 = new Politician( "President of Umpa Lumpa", true ); final Politician p2 = new Politician( "Prime Minster of Cheeseland", true ); final Politician p3 = new Politician( "Tsar of Pringapopaloo", true ); final Politician p4 = new Politician( "Omnipotence Om", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); // The application can also setup listeners 5 ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. ks.getLoggers().newFileLogger( ksession, "./target/honestpolitician" ); 6 ksession.fireAllRules(); ksession.dispose(); } } 1 Adds to your imports the packages that handle the DebugAgendaEventListener and DebugRuleRuntimeEventListener 2 Creates a KieServices Factory and a ks element to produce the logs because this audit log is not available at the KieContainer level 3 Modifies the execute method to use both KieServices and KieContainer 4 Modifies the execute method to pass in KieServices in addition to the KieContainer 5 Creates the listeners 6 Builds the log that can be passed into the debug view or Audit View of your IDE after execution of the rules When you run the Honest Politician example with this modified logging capability, you can load the audit log file from target/honestpolitician.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows the flow of executions, insertions, and retractions as defined in the example classes and rules: Figure 89.18. Honest Politician example Audit View When the first politician is inserted, two activations occur. The rule "We have an honest Politician" is activated only one time for the first inserted politician because it uses an exists conditional element, which matches when at least one politician is inserted. The rule "Hope is Dead" is also activated at this stage because the Hope object is not yet inserted. The rule "We have an honest Politician" fires first because it has a higher salience value than the rule "Hope is Dead" , and inserts the Hope object (highlighted in green). The insertion of the Hope object activates the rule "Hope Lives" and deactivates the rule "Hope is Dead" . The insertion also activates the rule "Corrupt the Honest" for each inserted honest politician. The rule "Hope Lives" is executed and prints "Hurrah!!! Democracy Lives" .
Next, for each politician, the rule "Corrupt the Honest" fires, printing "I'm an evil corporation and I have corrupted X" , where X is the name of the politician, and modifies the politician honesty value to false . When the last honest politician is corrupted, Hope is automatically retracted by the truth maintenance system (highlighted in blue). The green highlighted area shows the origin of the currently selected blue highlighted area. After the Hope fact is retracted, the rule "Hope is Dead" fires, printing "We are all Doomed!!! Democracy is Dead" . 89.8. Sudoku example decisions (complex pattern matching, callbacks, and GUI integration) The Sudoku example decision set, based on the popular number puzzle Sudoku, demonstrates how to use rules in Red Hat Process Automation Manager to find a solution in a large potential solution space based on various constraints. This example also shows how to integrate Red Hat Process Automation Manager rules into a graphical user interface (GUI), in this case a Swing-based desktop application, and how to use callbacks to interact with a running decision engine to update the GUI based on changes in the working memory at run time. The following is an overview of the Sudoku example: Name : sudoku Main class : org.drools.examples.sudoku.SudokuExample (in src/main/java ) Module : drools-examples Type : Java application Rule files : org.drools.examples.sudoku.*.drl (in src/main/resources ) Objective : Demonstrates complex pattern matching, problem solving, callbacks, and GUI integration Sudoku is a logic-based number placement puzzle. The objective is to fill a 9x9 grid so that each column, each row, and each of the nine 3x3 zones contains the digits from 1 to 9 only one time. The puzzle setter provides a partially completed grid and the puzzle solver's task is to complete the grid with these constraints. The general strategy to solve the problem is to ensure that when you insert a new number, it must be unique in its particular 3x3 zone, row, and column. This Sudoku example decision set uses Red Hat Process Automation Manager rules to solve Sudoku puzzles from a range of difficulty levels, and to attempt to resolve flawed puzzles that contain invalid entries. Sudoku example execution and interaction Similar to other Red Hat Process Automation Manager decision examples, you execute the Sudoku example by running the org.drools.examples.sudoku.SudokuExample class as a Java application in your IDE. When you execute the Sudoku example, the Drools Sudoku Example GUI window appears. This window contains an empty grid, but the program comes with various grids stored internally that you can load and solve. Click File Samples Simple to load one of the examples. Notice that all buttons are disabled until a grid is loaded. Figure 89.19. Sudoku example GUI after launch When you load the Simple example, the grid is filled according to the puzzle's initial state. Figure 89.20. Sudoku example GUI after loading Simple sample Choose from the following options: Click Solve to fire the rules defined in the Sudoku example that fill out the remaining values and that make the buttons inactive again. Figure 89.21. Simple sample solved Click Step to see the next digit found by the rule set. The console window in your IDE displays detailed information about the rules that are executing to solve the step. Step execution output in the IDE console Click Dump to see the state of the grid, with cells showing either the established value or the remaining possibilities.
Dump execution output in the IDE console The Sudoku example includes a deliberately broken sample file that the rules defined in the example can resolve. Click File Samples !DELIBERATELY BROKEN! to load the broken sample. The grid starts with some issues, for example, the value 5 appears two times in the first row, which is not allowed. Figure 89.22. Broken Sudoku example initial state Click Solve to apply the solving rules to this invalid grid. The associated solving rules in the Sudoku example detect the issues in the sample and attempt to solve the puzzle as far as possible. This process does not complete and leaves some cells empty. The solving rule activity is displayed in the IDE console window: Detected issues in the broken sample Figure 89.23. Broken sample solution attempt The sample Sudoku files labeled Hard are more complex and the solving rules might not be able to solve them. The unsuccessful solution attempt is displayed in the IDE console window: Hard sample unresolved The rules that work to solve the broken sample implement standard solving techniques based on the sets of values that are still candidates for a cell. For example, if a set contains a single value, then this is the value for the cell. For a single occurrence of a value in one of the groups of nine cells, the rules insert a fact of type Setting with the solution value for some specific cell. This fact causes the elimination of this value from all other cells in any of the groups the cell belongs to, and the Setting fact is then retracted. Other rules in the example reduce the permissible values for some cells. The rules "naked pair" , "hidden pair in row" , "hidden pair in column" , and "hidden pair in square" eliminate possibilities but do not establish solutions. The rules "X-wings in rows" , "X-wings in columns" , "intersection removal row" , and "intersection removal column" perform more sophisticated eliminations. Sudoku example classes The package org.drools.examples.sudoku.swing contains the following core set of classes that implement a framework for Sudoku puzzles: The SudokuGridModel class defines an interface that is implemented to store a Sudoku puzzle as a 9x9 grid of Cell objects. The SudokuGridView class is a Swing component that can visualize any implementation of the SudokuGridModel class. The SudokuGridEvent and SudokuGridListener classes communicate state changes between the model and the view. Events are fired when a cell value is resolved or changed. The SudokuGridSamples class provides partially filled Sudoku puzzles for demonstration purposes. Note This package does not have any dependencies on Red Hat Process Automation Manager libraries. The package org.drools.examples.sudoku contains the following core set of classes that implement the elementary Cell object and its various aggregations: The CellFile class, with subtypes CellRow , CellCol , and CellSqr , all of which are subtypes of the CellGroup class. The Cell and CellGroup subclasses of SetOfNine , which provides a property free with the type Set<Integer> . For a Cell class, the set represents the individual candidate set. For a CellGroup class, the set is the union of all candidate sets of its cells (the set of digits that still need to be allocated). In the Sudoku example, there are 81 Cell and 27 CellGroup objects and a linkage provided by the Cell properties cellRow , cellCol , and cellSqr , and by the CellGroup property cells (a list of Cell objects).
With these components, you can write rules that detect the specific situations that permit the allocation of a value to a cell or the elimination of a value from some candidate set. The Setting class is used to trigger the operations that accompany the allocation of a value. The presence of a Setting fact is used in all rules that detect a new situation in order to avoid reactions to inconsistent intermediary states. The Stepping class is used in a low priority rule to execute an emergency halt when a "Step" does not terminate regularly. This behavior indicates that the program cannot solve the puzzle. The main class org.drools.examples.sudoku.SudokuExample implements a Java application combining all of these components. Sudoku validation rules (validate.drl) The validate.drl file in the Sudoku example contains validation rules that detect duplicate numbers in cell groups. They are combined in a "validate" agenda group that enables the rules to be explicitly activated after a user loads the puzzle. The when conditions of the three rules "duplicate in cell ... " all function in the following ways: The first condition in the rule locates a cell with an allocated value. The second condition in the rule pulls in any of the three cell groups to which the cell belongs. The final condition finds a cell (other than the first one) with the same value as the first cell and in the same row, column, or square, depending on the rule. Rules "duplicate in cell ... " The rule "terminate group" is the last to fire. This rule prints a message and stops the sequence. Rule "terminate group" Sudoku solving rules (sudoku.drl) The sudoku.drl file in the Sudoku example contains three types of rules: one group handles the allocation of a number to a cell, another group detects feasible allocations, and the third group eliminates values from candidate sets. The rules "set a value" , "eliminate a value from Cell" , and "retract setting" depend on the presence of a Setting object. The first rule handles the assignment to the cell and the operations for removing the value from the free sets of the three groups of the cell. This group also reduces a counter that, when zero, returns control to the Java application that has called fireUntilHalt() . The purpose of the rule "eliminate a value from Cell" is to reduce the candidate lists of all cells that are related to the newly assigned cell. Finally, when all eliminations have been made, the rule "retract setting" retracts the triggering Setting fact. Rules "set a value", "eliminate a value from a Cell", and "retract setting" Two solving rules detect a situation where an allocation of a number to a cell is possible. The rule "single" fires for a Cell with a candidate set containing a single number. The rule "hidden single" fires when no cell exists with a single candidate but a cell exists containing a candidate that is absent from all other cells in one of the three groups to which the cell belongs. Both rules create and insert a Setting fact. Rules "single" and "hidden single" Rules from the largest group, either individually or in groups of two or three, implement various solving techniques used for solving Sudoku puzzles manually. The rule "naked pair" detects identical candidate sets of size 2 in two cells of a group. These two values may be removed from all other candidate sets of that group. Rule "naked pair" The three rules "hidden pair in ... " function similarly to the rule "naked pair" .
These rules detect a subset of two numbers in exactly two cells of a group, with neither value occurring in any of the other cells of the group. This means that all other candidates can be eliminated from the two cells harboring the hidden pair. Rules "hidden pair in ... " Two rules deal with "X-wings" in rows and columns. When only two possible cells for a value exist in each of two different rows (or columns) and these candidates lie also in the same columns (or rows), then all other candidates for this value in the columns (or rows) can be eliminated. When you follow the pattern sequence in one of these rules, notice how the conditions that are conveniently expressed by words such as same or only result in patterns with suitable constraints or that are prefixed with not . Rules "X-wings in ... " The two rules "intersection removal ... " are based on the restricted occurrence of some number within one square, either in a single row or in a single column. This means that this number must be in one of those two or three cells of the row or column and can be removed from the candidate sets of all other cells of the group. The pattern establishes the restricted occurrence and then fires for each cell outside of the square and within the same cell file. Rules "intersection removal ... " These rules are sufficient for many but not all Sudoku puzzles. To solve very difficult grids, the rule set requires more complex rules. (Ultimately, some puzzles can be solved only by trial and error.) 89.9. Conway's Game of Life example decisions (ruleflow groups and GUI integration) The Conway's Game of Life example decision set, based on the famous cellular automaton by John Conway, demonstrates how to use ruleflow groups in rules to control rule execution. The example also demonstrates how to integrate Red Hat Process Automation Manager rules with a graphical user interface (GUI), in this case a Swing-based implementation of Conway's Game of Life. The following is an overview of the Conway's Game of Life (Conway) example: Name : conway Main classes : org.drools.examples.conway.ConwayRuleFlowGroupRun , org.drools.examples.conway.ConwayAgendaGroupRun (in src/main/java ) Module : droolsjbpm-integration-examples Type : Java application Rule files : org.drools.examples.conway.*.drl (in src/main/resources ) Objective : Demonstrates ruleflow groups and GUI integration Note The Conway's Game of Life example is separate from most of the other example decision sets in Red Hat Process Automation Manager and is located in ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/droolsjbpm-integration-examples of the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal . In Conway's Game of Life, a user interacts with the game by creating an initial configuration or an advanced pattern with defined properties and then observing how the initial state evolves. The objective of the game is to show the development of a population, generation by generation. Each generation results from the preceding one, based on the simultaneous evaluation of all cells. The following basic rules govern what the next generation looks like: If a live cell has fewer than two live neighbors, it dies of loneliness. If a live cell has more than three live neighbors, it dies from overcrowding. If a dead cell has exactly three live neighbors, it comes to life. Any cell that does not meet any of those criteria is left as is for the next generation. (A DRL sketch of the first of these rules follows this list.)
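For illustration, the first of these life-and-death rules might be expressed in DRL as in the following sketch. The Cell properties and the Phase and CellState values follow the example classes and the phase-driven evaluation described later in this section, so treat the exact names as assumptions rather than the example's verbatim source: rule "Kill The Lonely" ruleflow-group "evaluate" no-loop true when // A live cell with fewer than two live neighbors dies of loneliness. theCell : Cell( liveNeighbors < 2, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then // Only mark the cell for death here; the state change itself // is applied later by the "kill" ruleflow group. modify( theCell ) { setPhase( Phase.KILL ) }; end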
The Conway's Game of Life example uses Red Hat Process Automation Manager rules with ruleflow-group attributes to define the pattern implemented in the game. The example also contains a version of the decision set that achieves the same behavior using agenda groups. Agenda groups enable you to partition the decision engine agenda to provide execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. This overview does not explore the version of the Conway example using agenda groups. For more information about agenda groups, see the Red Hat Process Automation Manager example decision sets that specifically address agenda groups. Conway example execution and interaction Similar to other Red Hat Process Automation Manager decision examples, you execute the Conway ruleflow example by running the org.drools.examples.conway.ConwayRuleFlowGroupRun class as a Java application in your IDE. When you execute the Conway example, the Conway's Game of Life GUI window appears. This window contains an empty grid, or "arena" where the life simulation takes place. Initially the grid is empty because no live cells are in the system yet. Figure 89.24. Conway example GUI after launch Select a predefined pattern from the Pattern drop-down menu and click Next Generation to click through each population generation. Each cell is either alive or dead, where live cells contain a green ball. As the population evolves from the initial pattern, cells live or die relative to neighboring cells, according to the rules of the game. Figure 89.25. Generation evolution in Conway example Neighbors include not only cells to the left, right, top, and bottom but also cells that are connected diagonally, so that each cell has a total of eight neighbors. Exceptions are the corner cells, which have only three neighbors, and the cells along the four borders, with five neighbors each. You can manually intervene to create or kill cells by clicking the cell. To run through an evolution automatically from the initial pattern, click Start . Conway example rules with ruleflow groups The rules in the ConwayRuleFlowGroupRun example use ruleflow groups to control rule execution. A ruleflow group is a group of rules associated by the ruleflow-group rule attribute. These rules can only fire when the group is activated. The group itself can only become active when the elaboration of the ruleflow diagram reaches the node representing the group. The Conway example uses the following ruleflow groups for rules: "register neighbor" "evaluate" "calculate" "reset calculate" "birth" "kill" "kill all" All of the Cell objects are inserted into the KIE session and the "register ... " rules in the ruleflow group "register neighbor" are allowed to execute by the ruleflow process. This group of four rules creates Neighbor relations between some cell and its northeastern, northern, northwestern, and western neighbors. This relation is bidirectional and handles the other four directions. Border cells do not require any special treatment. These cells are simply not paired with neighboring cells on the sides where no neighbor exists. By the time all activations have fired for these rules, all cells are related to all their neighboring cells. Rules "register ... " After all the cells are inserted, some Java code applies the pattern to the grid, setting certain cells to Live . Then, when the user clicks Start or Next Generation , the example executes the Generation ruleflow.
This ruleflow manages all changes of cells in each generation cycle. Figure 89.26. Generation ruleflow The ruleflow process enters the "evaluate" ruleflow group and any active rules in the group can fire. The rules "Kill the ... " and "Give Birth" in this group apply the game rules to birth or kill cells. The example uses the phase attribute to drive the reasoning of the Cell object by specific groups of rules. Typically, the phase is tied to a ruleflow group in the ruleflow process definition. Notice that the example does not change the state of any Cell objects at this point because it must complete the full evaluation before those changes can be applied. The example sets the cell to a phase that is either Phase.KILL or Phase.BIRTH , which is used later to control actions applied to the Cell object. Rules "Kill the ... " and "Give Birth" After all Cell objects in the grid have been evaluated, the example uses the "reset calculate" rule to clear any activations in the "calculate" ruleflow group. The example then enters a split in the ruleflow that enables the rules "kill" and "birth" to fire, if the ruleflow group is activated. These rules apply the state change. Rules "reset calculate", "kill", and "birth" At this stage, several Cell objects have been modified with the state changed to either LIVE or DEAD . When a cell becomes live or dead, the example uses the Neighbor relation in the rules "Calculate ... " to iterate over all surrounding cells, increasing or decreasing the liveNeighbor count. Any cell that has its count changed is also set to the EVALUATE phase to make sure it is included in the reasoning during the evaluation stage of the ruleflow process. After the live count has been determined and set for all cells, the ruleflow process ends. If the user initially clicked Start , the decision engine restarts the ruleflow at that point. If the user initially clicked Next Generation , the user can request another generation. Rules "Calculate ... " 89.10. House of Doom example decisions (backward chaining and recursion) The House of Doom example decision set demonstrates how the decision engine uses backward chaining and recursion to reach defined goals or subgoals in a hierarchical system. The following is an overview of the House of Doom example: Name : backwardchaining Main class : org.drools.examples.backwardchaining.HouseOfDoomMain (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.backwardchaining.BC-Example.drl (in src/main/resources ) Objective : Demonstrates backward chaining and recursion A backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. In contrast, a forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. The decision engine in Red Hat Process Automation Manager uses both forward and backward chaining to evaluate rules.
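Before looking at the example itself, the following DRL sketch shows the general shape of the backward chaining used here: a recursive query plus a rule that invokes it. The Location fact and the isContainedIn query match the descriptions in the rest of this section, but treat this as a sketch rather than the verbatim contents of BC-Example.drl : query isContainedIn( String x, String y ) // Base case: x is directly in y. Location( x, y; ) or // Recursive case: x is in some z that is (transitively) in y. ( Location( z, y; ) and isContainedIn( x, z; ) ) end rule "go1" when String( this == "go1" ) isContainedIn( "Office", "House"; ) then System.out.println( "office is in the house" ); end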
The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 89.27. Rule evaluation logic using forward and backward chaining The House of Doom example uses rules with various types of queries to find the location of rooms and items within the house. The sample class Location.java contains the item and location elements used in the example. The sample class HouseOfDoomMain.java inserts the items or rooms in their respective locations in the house and executes the rules. Items and locations in HouseOfDoomMain.java class ksession.insert( new Location("Office", "House") ); ksession.insert( new Location("Kitchen", "House") ); ksession.insert( new Location("Knife", "Kitchen") ); ksession.insert( new Location("Cheese", "Kitchen") ); ksession.insert( new Location("Desk", "Office") ); ksession.insert( new Location("Chair", "Office") ); ksession.insert( new Location("Computer", "Desk") ); ksession.insert( new Location("Drawer", "Desk") ); The example rules rely on backward chaining and recursion to determine the location of all items and rooms in the house structure. The following diagram illustrates the structure of the House of Doom and the items and rooms within it: Figure 89.28. House of Doom structure To execute the example, run the org.drools.examples.backwardchaining.HouseOfDoomMain class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Execution output in the IDE console All rules in the example have fired to detect the location of all items in the house and to print the location of each in the output. Recursive query and related rules A recursive query repeatedly searches through the hierarchy of a data structure for relationships between elements. In the House of Doom example, the BC-Example.drl file contains an isContainedIn query that most of the rules in the example use to recursively evaluate the house data structure for data inserted into the decision engine: Recursive query in BC-Example.drl The rule "go" prints every string inserted into the system to determine how items are implemented, and the rule "go1" calls the query isContainedIn : Rules "go" and "go1" The example inserts the "go1" string into the decision engine and activates the "go1" rule to detect that item Office is in the location House : Insert string and fire rules Rule "go1" output in the IDE console Transitive closure rule Transitive closure is a relationship between an element contained in a parent element that is multiple levels higher in a hierarchical structure. The rule "go2" identifies the transitive closure relationship of the Drawer and the House : The Drawer is in the Desk in the Office in the House . The example inserts the "go2" string into the decision engine and activates the "go2" rule to detect that item Drawer is ultimately within the location House : Insert string and fire rules Rule "go2" output in the IDE console The decision engine determines this outcome based on the following logic: The query recursively searches through several levels in the house to detect the transitive closure between Drawer and House . Instead of using Location( x, y; ) , the query uses the value of (z, y; ) because Drawer is not directly in House . The z argument is currently unbound, which means it has no value and returns everything that is in the argument. The y argument is currently bound to House , so z returns Office and Kitchen . 
The query gathers information from the Office and checks recursively if the Drawer is in the Office . The query line isContainedIn( x, z; ) is called for these parameters. No instance of Drawer exists directly in Office , so no match is found. With z unbound, the query returns data within the Office and determines that z == Desk . The isContainedIn query recursively searches three times, and on the third time, the query detects an instance of Drawer in Desk . After this match on the first location, the query recursively searches back up the structure to determine that the Drawer is in the Desk , the Desk is in the Office , and the Office is in the House . Therefore, the Drawer is in the House and the rule is satisfied. Reactive query rule A reactive query searches through the hierarchy of a data structure for relationships between elements and is dynamically updated when elements in the structure are modified. The rule "go3" functions as a reactive query that detects if a new item Key ever becomes present in the Office by transitive closure: A Key in the Drawer in the Office . Rule "go3" The example inserts the "go3" string into the decision engine and activates the "go3" rule. Initially, this rule is not satisfied because no item Key exists in the house structure, so the rule produces no output. Insert string and fire rules Rule "go3" output in the IDE console (unsatisfied) The example then inserts a new item Key in the location Drawer , which is in Office . This change satisfies the transitive closure in the "go3" rule and the output is populated accordingly. Insert new item location and fire rules Rule "go3" output in the IDE console (satisfied) This change also adds another level in the structure that the query includes in subsequent recursive searches. Queries with unbound arguments in rules A query with one or more unbound arguments returns all undefined (unbound) items within a defined (bound) argument of the query. If all arguments in a query are unbound, then the query returns all items within the scope of the query. The rule "go4" uses an unbound argument thing to search for all items within the bound argument Office , instead of using a bound argument to search for a specific item in the Office : Rule "go4" The example inserts the "go4" string into the decision engine and activates the "go4" rule to return all items in the Office : Insert string and fire rules Rule "go4" output in the IDE console The rule "go5" uses both unbound arguments thing and location to search for all items and their locations in the entire House data structure: Rule "go5" The example inserts the "go5" string into the decision engine and activates the "go5" rule to return all items and their locations in the House data structure: Insert string and fire rules Rule "go5" output in the IDE console | [
"KieServices ks = KieServices.Factory.get(); 1 KieContainer kc = ks.getKieClasspathContainer(); 2 KieSession ksession = kc.newKieSession(\"HelloWorldKS\"); 3",
"// Set up listeners. ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. KieRuntimeLogger logger = KieServices.get().getLoggers().newFileLogger( ksession, \"./target/helloworld\" ); // Set up a ThreadedFileLogger so that the audit view reflects events while debugging. KieRuntimeLogger logger = ks.getLoggers().newThreadedFileLogger( ksession, \"./target/helloworld\", 1000 );",
"// Insert facts into the KIE session. final Message message = new Message(); message.setMessage( \"Hello World\" ); message.setStatus( Message.HELLO ); ksession.insert( message ); // Fire the rules. ksession.fireAllRules();",
"public static class Message { public static final int HELLO = 0; public static final int GOODBYE = 1; private String message; private int status; }",
"rule \"Hello World\" when m : Message( status == Message.HELLO, message : message ) then System.out.println( message ); modify ( m ) { message = \"Goodbye cruel world\", status = Message.GOODBYE }; end",
"rule \"Good Bye\" when Message( status == Message.GOODBYE, message : message ) then System.out.println( message ); end",
"Hello World Goodbye cruel world",
"==>[ActivationCreated(0): rule=Hello World; tuple=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [ObjectInserted: handle=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]; object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96] [BeforeActivationFired: rule=Hello World; tuple=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] ==>[ActivationCreated(4): rule=Good Bye; tuple=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [ObjectUpdated: handle=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]; old_object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96; new_object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96] [AfterActivationFired(0): rule=Hello World] [BeforeActivationFired: rule=Good Bye; tuple=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [AfterActivationFired(4): rule=Good Bye]",
"public class State { public static final int NOTRUN = 0; public static final int FINISHED = 1; private final PropertyChangeSupport changes = new PropertyChangeSupport( this ); private String name; private int state; ... setters and getters go here }",
"final State a = new State( \"A\" ); final State b = new State( \"B\" ); final State c = new State( \"C\" ); final State d = new State( \"D\" ); ksession.insert( a ); ksession.insert( b ); ksession.insert( c ); ksession.insert( d ); ksession.fireAllRules(); // Dispose KIE session if stateful (not required if stateless). ksession.dispose();",
"A finished B finished C finished D finished",
"rule \"Bootstrap\" when a : State(name == \"A\", state == State.NOTRUN ) then System.out.println(a.getName() + \" finished\" ); a.setState( State.FINISHED ); end",
"rule \"A to B\" when State(name == \"A\", state == State.FINISHED ) b : State(name == \"B\", state == State.NOTRUN ) then System.out.println(b.getName() + \" finished\" ); b.setState( State.FINISHED ); end",
"rule \"B to C\" salience 10 when State(name == \"B\", state == State.FINISHED ) c : State(name == \"C\", state == State.NOTRUN ) then System.out.println(c.getName() + \" finished\" ); c.setState( State.FINISHED ); end rule \"B to D\" when State(name == \"B\", state == State.FINISHED ) d : State(name == \"D\", state == State.NOTRUN ) then System.out.println(d.getName() + \" finished\" ); d.setState( State.FINISHED ); end",
"rule \"B to C\" agenda-group \"B to C\" auto-focus true when State(name == \"B\", state == State.FINISHED ) c : State(name == \"C\", state == State.NOTRUN ) then System.out.println(c.getName() + \" finished\" ); c.setState( State.FINISHED ); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"B to D\" ).setFocus(); end",
"rule \"B to D\" agenda-group \"B to D\" when State(name == \"B\", state == State.FINISHED ) d : State(name == \"D\", state == State.NOTRUN ) then System.out.println(d.getName() + \" finished\" ); d.setState( State.FINISHED ); end",
"A finished B finished C finished D finished",
"declare type State @propertyChangeSupport end",
"public void setState(final int newState) { int oldState = this.state; this.state = newState; this.changes.firePropertyChange( \"state\", oldState, newState ); }",
"public static class Fibonacci { private int sequence; private long value; public Fibonacci( final int sequence ) { this.sequence = sequence; this.value = -1; } ... setters and getters go here }",
"recurse for 50 recurse for 49 recurse for 48 recurse for 47 recurse for 5 recurse for 4 recurse for 3 recurse for 2 1 == 1 2 == 1 3 == 2 4 == 3 5 == 5 6 == 8 47 == 2971215073 48 == 4807526976 49 == 7778742049 50 == 12586269025",
"ksession.insert( new Fibonacci( 50 ) ); ksession.fireAllRules();",
"rule \"Recurse\" salience 10 when f : Fibonacci ( value == -1 ) not ( Fibonacci ( sequence == 1 ) ) then insert( new Fibonacci( f.sequence - 1 ) ); System.out.println( \"recurse for \" + f.sequence ); end",
"rule \"Bootstrap\" when f : Fibonacci( sequence == 1 || == 2, value == -1 ) // multi-restriction then modify ( f ){ value = 1 }; System.out.println( f.sequence + \" == \" + f.value ); end",
"rule \"Calculate\" when // Bind f1 and s1. f1 : Fibonacci( s1 : sequence, value != -1 ) // Bind f2 and v2, refer to bound variable s1. f2 : Fibonacci( sequence == (s1 + 1), v2 : value != -1 ) // Bind f3 and s3, alternative reference of f2.sequence. f3 : Fibonacci( s3 : sequence == (f2.sequence + 1 ), value == -1 ) then // Note the various referencing techniques. modify ( f3 ) { value = f1.value + v2 }; System.out.println( s3 + \" == \" + f3.value ); end",
"Cheapest possible BASE PRICE IS: 120 DISCOUNT IS: 20",
"template header age[] profile priorClaims policyType base reason package org.drools.examples.decisiontable; template \"Pricing bracket\" age policyType base rule \"Pricing bracket_@{row.rowNumber}\" when Driver(age >= @{age0}, age <= @{age1} , priorClaims == \"@{priorClaims}\" , locationRiskProfile == \"@{profile}\" ) policy: Policy(type == \"@{policyType}\") then policy.setBasePrice(@{base}); System.out.println(\"@{reason}\"); end end template",
"template header age[] priorClaims policyType discount package org.drools.examples.decisiontable; template \"discounts\" age priorClaims policyType discount rule \"Discounts_@{row.rowNumber}\" when Driver(age >= @{age0}, age <= @{age1}, priorClaims == \"@{priorClaims}\") policy: Policy(type == \"@{policyType}\") then policy.applyDiscount(@{discount}); end end template",
"<kbase name=\"DecisionTableKB\" packages=\"org.drools.examples.decisiontable\"> <ksession name=\"DecisionTableKS\" type=\"stateless\"/> </kbase> <kbase name=\"DTableWithTemplateKB\" packages=\"org.drools.examples.decisiontable-template\"> <ruleTemplate dtable=\"org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls\" template=\"org/drools/examples/decisiontable-template/BasePricing.drt\" row=\"3\" col=\"3\"/> <ruleTemplate dtable=\"org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls\" template=\"org/drools/examples/decisiontable-template/PromotionalPricing.drt\" row=\"18\" col=\"3\"/> <ksession name=\"DTableWithTemplateKS\"/> </kbase>",
"DecisionTableConfiguration dtableconfiguration = KnowledgeBuilderFactory.newDecisionTableConfiguration(); dtableconfiguration.setInputType( DecisionTableInputType.XLS ); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource xlsRes = ResourceFactory.newClassPathResource( \"ExamplePolicyPricing.xls\", getClass() ); kbuilder.add( xlsRes, ResourceType.DTABLE, dtableconfiguration );",
"// KieServices is the factory for all KIE services. KieServices ks = KieServices.Factory.get(); // Create a KIE container on the class path. KieContainer kc = ks.getKieClasspathContainer(); // Create the stock. Vector<Product> stock = new Vector<Product>(); stock.add( new Product( \"Gold Fish\", 5 ) ); stock.add( new Product( \"Fish Tank\", 25 ) ); stock.add( new Product( \"Fish Food\", 2 ) ); // A callback is responsible for populating the working memory and for firing all rules. PetStoreUI ui = new PetStoreUI( stock, new CheckoutCallback( kc ) ); ui.createAndShowGUI();",
"public String checkout(JFrame frame, List<Product> items) { Order order = new Order(); // Iterate through list and add to cart. for ( Product p: items ) { order.addItem( new Purchase( order, p ) ); } // Add the JFrame to the ApplicationData to allow for user interaction. // From the KIE container, a KIE session is created based on // its definition and configuration in the META-INF/kmodule.xml file. KieSession ksession = kcontainer.newKieSession(\"PetStoreKS\"); ksession.setGlobal( \"frame\", frame ); ksession.setGlobal( \"textArea\", this.output ); ksession.insert( new Product( \"Gold Fish\", 5 ) ); ksession.insert( new Product( \"Fish Tank\", 25 ) ); ksession.insert( new Product( \"Fish Food\", 2 ) ); ksession.insert( new Product( \"Fish Food Sample\", 0 ) ); ksession.insert( order ); // Execute rules. ksession.fireAllRules(); // Return the state of the cart return order.toString(); }",
"package org.drools.examples; import org.kie.api.runtime.KieRuntime; import org.drools.examples.petstore.PetStoreExample.Order; import org.drools.examples.petstore.PetStoreExample.Purchase; import org.drools.examples.petstore.PetStoreExample.Product; import java.util.ArrayList; import javax.swing.JOptionPane; import javax.swing.JFrame; global JFrame frame global javax.swing.JTextArea textArea",
"function void doCheckout(JFrame frame, KieRuntime krt) { Object[] options = {\"Yes\", \"No\"}; int n = JOptionPane.showOptionDialog(frame, \"Would you like to checkout?\", \"\", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); if (n == 0) { krt.getAgenda().getAgendaGroup( \"checkout\" ).setFocus(); } } function boolean requireTank(JFrame frame, KieRuntime krt, Order order, Product fishTank, int total) { Object[] options = {\"Yes\", \"No\"}; int n = JOptionPane.showOptionDialog(frame, \"Would you like to buy a tank for your \" + total + \" fish?\", \"Purchase Suggestion\", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); System.out.print( \"SUGGESTION: Would you like to buy a tank for your \" + total + \" fish? - \" ); if (n == 0) { Purchase purchase = new Purchase( order, fishTank ); krt.insert( purchase ); order.addItem( purchase ); System.out.println( \"Yes\" ); } else { System.out.println( \"No\" ); } return true; }",
"// Insert each item in the shopping cart into the working memory. rule \"Explode Cart\" agenda-group \"init\" auto-focus true salience 10 when USDorder : Order( grossTotal == -1 ) USDitem : Purchase() from USDorder.items then insert( USDitem ); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"show items\" ).setFocus(); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"evaluate\" ).setFocus(); end",
"rule \"Show Items\" agenda-group \"show items\" when USDorder : Order() USDp : Purchase( order == USDorder ) then textArea.append( USDp.product + \"\\n\"); end",
"// Free fish food sample when users buy a goldfish if they did not already buy // fish food and do not already have a fish food sample. rule \"Free Fish Food Sample\" agenda-group \"evaluate\" 1 when USDorder : Order() not ( USDp : Product( name == \"Fish Food\") && Purchase( product == USDp ) ) 2 not ( USDp : Product( name == \"Fish Food Sample\") && Purchase( product == USDp ) ) 3 exists ( USDp : Product( name == \"Gold Fish\") && Purchase( product == USDp ) ) 4 USDfishFoodSample : Product( name == \"Fish Food Sample\" ); then System.out.println( \"Adding free Fish Food Sample to cart\" ); purchase = new Purchase(USDorder, USDfishFoodSample); insert( purchase ); USDorder.addItem( purchase ); end",
"// Suggest a fish tank if users buy more than five goldfish and // do not already have a tank. rule \"Suggest Tank\" agenda-group \"evaluate\" when USDorder : Order() not ( USDp : Product( name == \"Fish Tank\") && Purchase( product == USDp ) ) 1 ArrayList( USDtotal : size > 5 ) from collect( Purchase( product.name == \"Gold Fish\" ) ) 2 USDfishTank : Product( name == \"Fish Tank\" ) then requireTank(frame, kcontext.getKieRuntime(), USDorder, USDfishTank, USDtotal); end",
"rule \"do checkout\" when then doCheckout(frame, kcontext.getKieRuntime()); end",
"rule \"Gross Total\" agenda-group \"checkout\" when USDorder : Order( grossTotal == -1) Number( total : doubleValue ) from accumulate( Purchase( USDprice : product.price ), sum( USDprice ) ) then modify( USDorder ) { grossTotal = total } textArea.append( \"\\ngross total=\" + total + \"\\n\" ); end rule \"Apply 5% Discount\" agenda-group \"checkout\" when USDorder : Order( grossTotal >= 10 && < 20 ) then USDorder.discountedTotal = USDorder.grossTotal * 0.95; textArea.append( \"discountedTotal total=\" + USDorder.discountedTotal + \"\\n\" ); end rule \"Apply 10% Discount\" agenda-group \"checkout\" when USDorder : Order( grossTotal >= 20 ) then USDorder.discountedTotal = USDorder.grossTotal * 0.90; textArea.append( \"discountedTotal total=\" + USDorder.discountedTotal + \"\\n\" ); end",
"Adding free Fish Food Sample to cart SUGGESTION: Would you like to buy a tank for your 6 fish? - Yes",
"public class Politician { private String name; private boolean honest; }",
"public class Hope { public Hope() { } }",
"rule \"We have an honest Politician\" salience 10 when exists( Politician( honest == true ) ) then insertLogical( new Hope() ); end",
"rule \"Hope Lives\" salience 10 when exists( Hope() ) then System.out.println(\"Hurrah!!! Democracy Lives\"); end",
"rule \"Corrupt the Honest\" when politician : Politician( honest == true ) exists( Hope() ) then System.out.println( \"I'm an evil corporation and I have corrupted \" + politician.getName() ); modify ( politician ) { honest = false }; end",
"rule \"Hope is Dead\" when not( Hope() ) then System.out.println( \"We are all Doomed!!! Democracy is Dead\" ); end",
"public static void execute( KieContainer kc ) { KieSession ksession = kc.newKieSession(\"HonestPoliticianKS\"); final Politician p1 = new Politician( \"President of Umpa Lumpa\", true ); final Politician p2 = new Politician( \"Prime Minster of Cheeseland\", true ); final Politician p3 = new Politician( \"Tsar of Pringapopaloo\", true ); final Politician p4 = new Politician( \"Omnipotence Om\", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); ksession.fireAllRules(); ksession.dispose(); }",
"Hurrah!!! Democracy Lives I'm an evil corporation and I have corrupted President of Umpa Lumpa I'm an evil corporation and I have corrupted Prime Minster of Cheeseland I'm an evil corporation and I have corrupted Tsar of Pringapopaloo I'm an evil corporation and I have corrupted Omnipotence Om We are all Doomed!!! Democracy is Dead",
"package org.drools.examples.honestpolitician; import org.kie.api.KieServices; import org.kie.api.event.rule.DebugAgendaEventListener; 1 import org.kie.api.event.rule.DebugRuleRuntimeEventListener; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public class HonestPoliticianExample { /** * @param args */ public static void main(final String[] args) { KieServices ks = KieServices.Factory.get(); 2 //ks = KieServices.Factory.get(); KieContainer kc = KieServices.Factory.get().getKieClasspathContainer(); System.out.println(kc.verify().getMessages().toString()); //execute( kc ); execute( ks, kc); 3 } public static void execute( KieServices ks, KieContainer kc ) { 4 KieSession ksession = kc.newKieSession(\"HonestPoliticianKS\"); final Politician p1 = new Politician( \"President of Umpa Lumpa\", true ); final Politician p2 = new Politician( \"Prime Minster of Cheeseland\", true ); final Politician p3 = new Politician( \"Tsar of Pringapopaloo\", true ); final Politician p4 = new Politician( \"Omnipotence Om\", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); // The application can also setup listeners 5 ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. ks.getLoggers().newFileLogger( ksession, \"./target/honestpolitician\" ); 6 ksession.fireAllRules(); ksession.dispose(); } }",
"single 8 at [0,1] column elimination due to [1,2]: remove 9 from [4,2] hidden single 9 at [1,2] row elimination due to [2,8]: remove 7 from [2,4] remove 6 from [3,8] due to naked pair at [3,2] and [3,7] hidden pair in row at [4,6] and [4,4]",
"Col: 0 Col: 1 Col: 2 Col: 3 Col: 4 Col: 5 Col: 6 Col: 7 Col: 8 Row 0: 123456789 --- 5 --- --- 6 --- --- 8 --- 123456789 --- 1 --- --- 9 --- --- 4 --- 123456789 Row 1: --- 9 --- 123456789 123456789 --- 6 --- 123456789 --- 5 --- 123456789 123456789 --- 3 --- Row 2: --- 7 --- 123456789 123456789 --- 4 --- --- 9 --- --- 3 --- 123456789 123456789 --- 8 --- Row 3: --- 8 --- --- 9 --- --- 7 --- 123456789 --- 4 --- 123456789 --- 6 --- --- 3 --- --- 5 --- Row 4: 123456789 123456789 --- 3 --- --- 9 --- 123456789 --- 6 --- --- 8 --- 123456789 123456789 Row 5: --- 4 --- --- 6 --- --- 5 --- 123456789 --- 8 --- 123456789 --- 2 --- --- 9 --- --- 1 --- Row 6: --- 5 --- 123456789 123456789 --- 2 --- --- 6 --- --- 9 --- 123456789 123456789 --- 7 --- Row 7: --- 6 --- 123456789 123456789 --- 5 --- 123456789 --- 4 --- 123456789 123456789 --- 9 --- Row 8: 123456789 --- 4 --- --- 9 --- --- 7 --- 123456789 --- 8 --- --- 3 --- --- 5 --- 123456789",
"cell [0,8]: 5 has a duplicate in row 0 cell [0,0]: 5 has a duplicate in row 0 cell [6,0]: 8 has a duplicate in col 0 cell [4,0]: 8 has a duplicate in col 0 Validation complete.",
"Validation complete. Sorry - can't solve this grid.",
"rule \"duplicate in cell row\" when USDc: Cell( USDv: value != null ) USDcr: CellRow( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellRow == USDcr ) then System.out.println( \"cell \" + USDc.toString() + \" has a duplicate in row \" + USDcr.getNumber() ); end rule \"duplicate in cell col\" when USDc: Cell( USDv: value != null ) USDcc: CellCol( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellCol == USDcc ) then System.out.println( \"cell \" + USDc.toString() + \" has a duplicate in col \" + USDcc.getNumber() ); end rule \"duplicate in cell sqr\" when USDc: Cell( USDv: value != null ) USDcs: CellSqr( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellSqr == USDcs ) then System.out.println( \"cell \" + USDc.toString() + \" has duplicate in its square of nine.\" ); end",
"rule \"terminate group\" salience -100 when then System.out.println( \"Validation complete.\" ); drools.halt(); end",
"// A Setting object is inserted to define the value of a Cell. // Rule for updating the cell and all cell groups that contain it rule \"set a value\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // A matching Cell, with no value set USDc: Cell( rowNo == USDrn, colNo == USDcn, value == null, USDcr: cellRow, USDcc: cellCol, USDcs: cellSqr ) // Count down USDctr: Counter( USDcount: count ) then // Modify the Cell by setting its value. modify( USDc ){ setValue( USDv ) } // System.out.println( \"set cell \" + USDc.toString() ); modify( USDcr ){ blockValue( USDv ) } modify( USDcc ){ blockValue( USDv ) } modify( USDcs ){ blockValue( USDv ) } modify( USDctr ){ setCount( USDcount - 1 ) } end // Rule for removing a value from all cells that are siblings // in one of the three cell groups rule \"eliminate a value from Cell\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // The matching Cell, with the value already set Cell( rowNo == USDrn, colNo == USDcn, value == USDv, USDexCells: exCells ) // For all Cells that are associated with the updated cell USDc: Cell( free contains USDv ) from USDexCells then // System.out.println( \"clear \" + USDv + \" from cell \" + USDc.posAsString() ); // Modify a related Cell by blocking the assigned value. modify( USDc ){ blockValue( USDv ) } end // Rule for eliminating the Setting fact rule \"retract setting\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // The matching Cell, with the value already set USDc: Cell( rowNo == USDrn, colNo == USDcn, value == USDv ) // This is the negation of the last pattern in the previous rule. // Now the Setting fact can be safely retracted. not( USDx: Cell( free contains USDv ) and Cell( this == USDc, exCells contains USDx ) ) then // System.out.println( \"done setting cell \" + USDc.toString() ); // Discard the Setter fact. delete( USDs ); // Sudoku.sudoku.consistencyCheck(); end",
"// Detect a set of candidate values with cardinality 1 for some Cell. // This is the value to be set. rule \"single\" when // Currently no setting underway not Setting() // One element in the \"free\" set USDc: Cell( USDrn: rowNo, USDcn: colNo, freeCount == 1 ) then Integer i = USDc.getFreeValue(); if (explain) System.out.println( \"single \" + i + \" at \" + USDc.posAsString() ); // Insert another Setter fact. insert( new Setting( USDrn, USDcn, i ) ); end // Detect a set of candidate values with a value that is the only one // in one of its groups. This is the value to be set. rule \"hidden single\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // Some integer USDi: Integer() // The \"free\" set contains this number USDc: Cell( USDrn: rowNo, USDcn: colNo, freeCount > 1, free contains USDi ) // A cell group contains this cell USDc. USDcg: CellGroup( cells contains USDc ) // No other cell from that group contains USDi. not ( Cell( this != USDc, free contains USDi ) from USDcg.getCells() ) then if (explain) System.out.println( \"hidden single \" + USDi + \" at \" + USDc.posAsString() ); // Insert another Setter fact. insert( new Setting( USDrn, USDcn, USDi ) ); end",
"// A \"naked pair\" is two cells in some cell group with their sets of // permissible values being equal with cardinality 2. These two values // can be removed from all other candidate lists in the group. rule \"naked pair\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // One cell with two candidates USDc1: Cell( freeCount == 2, USDf1: free, USDr1: cellRow, USDrn1: rowNo, USDcn1: colNo, USDb1: cellSqr ) // The containing cell group USDcg: CellGroup( freeCount > 2, cells contains USDc1 ) // Another cell with two candidates, not the one we already have USDc2: Cell( this != USDc1, free == USDf1 /*** , rowNo >= USDrn1, colNo >= USDcn1 ***/ ) from USDcg.cells // Get one of the \"naked pair\". Integer( USDv: intValue ) from USDc1.getFree() // Get some other cell with a candidate equal to one from the pair. USDc3: Cell( this != USDc1 && != USDc2, freeCount > 1, free contains USDv ) from USDcg.cells then if (explain) System.out.println( \"remove \" + USDv + \" from \" + USDc3.posAsString() + \" due to naked pair at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); // Remove the value. modify( USDc3 ){ blockValue( USDv ) } end",
"// If two cells within the same cell group contain candidate sets with more than // two values, with two values being in both of them but in none of the other // cells, then we have a \"hidden pair\". We can remove all other candidates from // these two cells. rule \"hidden pair in row\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // Establish a pair of Integer facts. USDi1: Integer() USDi2: Integer( this > USDi1 ) // Look for a Cell with these two among its candidates. (The upper bound on // the number of candidates avoids a lot of useless work during startup.) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellRow: cellRow ) // Get another one from the same row, with the same pair among its candidates. USDc2: Cell( this != USDc1, cellRow == USDcellRow, freeCount > 2, free contains USDi1 && contains USDi2 ) // Ascertain that no other cell in the group has one of these two values. not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellRow.getCells() ) then if( explain) System.out.println( \"hidden pair in row at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); // Set the candidate lists of these two Cells to the \"hidden pair\". modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end rule \"hidden pair in column\" when not Setting() not Cell( freeCount == 1 ) USDi1: Integer() USDi2: Integer( this > USDi1 ) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellCol: cellCol ) USDc2: Cell( this != USDc1, cellCol == USDcellCol, freeCount > 2, free contains USDi1 && contains USDi2 ) not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellCol.getCells() ) then if (explain) System.out.println( \"hidden pair in column at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end rule \"hidden pair in square\" when not Setting() not Cell( freeCount == 1 ) USDi1: Integer() USDi2: Integer( this > USDi1 ) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellSqr: cellSqr ) USDc2: Cell( this != USDc1, cellSqr == USDcellSqr, freeCount > 2, free contains USDi1 && contains USDi2 ) not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellSqr.getCells() ) then if (explain) System.out.println( \"hidden pair in square \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end",
"rule \"X-wings in rows\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() USDca1: Cell( freeCount > 1, free contains USDi, USDra: cellRow, USDrano: rowNo, USDc1: cellCol, USDc1no: colNo ) USDcb1: Cell( freeCount > 1, free contains USDi, USDrb: cellRow, USDrbno: rowNo > USDrano, cellCol == USDc1 ) not( Cell( this != USDca1 && != USDcb1, free contains USDi ) from USDc1.getCells() ) USDca2: Cell( freeCount > 1, free contains USDi, cellRow == USDra, USDc2: cellCol, USDc2no: colNo > USDc1no ) USDcb2: Cell( freeCount > 1, free contains USDi, cellRow == USDrb, cellCol == USDc2 ) not( Cell( this != USDca2 && != USDcb2, free contains USDi ) from USDc2.getCells() ) USDcx: Cell( rowNo == USDrano || == USDrbno, colNo != USDc1no && != USDc2no, freeCount > 1, free contains USDi ) then if (explain) { System.out.println( \"X-wing with \" + USDi + \" in rows \" + USDca1.posAsString() + \" - \" + USDcb1.posAsString() + USDca2.posAsString() + \" - \" + USDcb2.posAsString() + \", remove from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end rule \"X-wings in columns\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() USDca1: Cell( freeCount > 1, free contains USDi, USDc1: cellCol, USDc1no: colNo, USDra: cellRow, USDrano: rowNo ) USDca2: Cell( freeCount > 1, free contains USDi, USDc2: cellCol, USDc2no: colNo > USDc1no, cellRow == USDra ) not( Cell( this != USDca1 && != USDca2, free contains USDi ) from USDra.getCells() ) USDcb1: Cell( freeCount > 1, free contains USDi, cellCol == USDc1, USDrb: cellRow, USDrbno: rowNo > USDrano ) USDcb2: Cell( freeCount > 1, free contains USDi, cellCol == USDc2, cellRow == USDrb ) not( Cell( this != USDcb1 && != USDcb2, free contains USDi ) from USDrb.getCells() ) USDcx: Cell( colNo == USDc1no || == USDc2no, rowNo != USDrano && != USDrbno, freeCount > 1, free contains USDi ) then if (explain) { System.out.println( \"X-wing with \" + USDi + \" in columns \" + USDca1.posAsString() + \" - \" + USDca2.posAsString() + USDcb1.posAsString() + \" - \" + USDcb2.posAsString() + \", remove from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end",
"rule \"intersection removal column\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() // Occurs in a Cell USDc: Cell( free contains USDi, USDcs: cellSqr, USDcc: cellCol ) // Does not occur in another cell of the same square and a different column not Cell( this != USDc, free contains USDi, cellSqr == USDcs, cellCol != USDcc ) // A cell exists in the same column and another square containing this value. USDcx: Cell( freeCount > 1, free contains USDi, cellCol == USDcc, cellSqr != USDcs ) then // Remove the value from that other cell. if (explain) { System.out.println( \"column elimination due to \" + USDc.posAsString() + \": remove \" + USDi + \" from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end rule \"intersection removal row\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() // Occurs in a Cell USDc: Cell( free contains USDi, USDcs: cellSqr, USDcr: cellRow ) // Does not occur in another cell of the same square and a different row. not Cell( this != USDc, free contains USDi, cellSqr == USDcs, cellRow != USDcr ) // A cell exists in the same row and another square containing this value. USDcx: Cell( freeCount > 1, free contains USDi, cellRow == USDcr, cellSqr != USDcs ) then // Remove the value from that other cell. if (explain) { System.out.println( \"row elimination due to \" + USDc.posAsString() + \": remove \" + USDi + \" from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end",
"rule \"register north east\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorthEast : Cell( row == (USDrow - 1), col == ( USDcol + 1 ) ) then insert( new Neighbor( USDcell, USDnorthEast ) ); insert( new Neighbor( USDnorthEast, USDcell ) ); end rule \"register north\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorth : Cell( row == (USDrow - 1), col == USDcol ) then insert( new Neighbor( USDcell, USDnorth ) ); insert( new Neighbor( USDnorth, USDcell ) ); end rule \"register north west\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorthWest : Cell( row == (USDrow - 1), col == ( USDcol - 1 ) ) then insert( new Neighbor( USDcell, USDnorthWest ) ); insert( new Neighbor( USDnorthWest, USDcell ) ); end rule \"register west\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDwest : Cell( row == USDrow, col == ( USDcol - 1 ) ) then insert( new Neighbor( USDcell, USDwest ) ); insert( new Neighbor( USDwest, USDcell ) ); end",
"rule \"Kill The Lonely\" ruleflow-group \"evaluate\" no-loop when // A live cell has fewer than 2 live neighbors. theCell: Cell( liveNeighbors < 2, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule \"Kill The Overcrowded\" ruleflow-group \"evaluate\" no-loop when // A live cell has more than 3 live neighbors. theCell: Cell( liveNeighbors > 3, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule \"Give Birth\" ruleflow-group \"evaluate\" no-loop when // A dead cell has 3 live neighbors. theCell: Cell( liveNeighbors == 3, cellState == CellState.DEAD, phase == Phase.EVALUATE ) then modify( theCell ){ theCell.setPhase( Phase.BIRTH ); } end",
"rule \"reset calculate\" ruleflow-group \"reset calculate\" when then WorkingMemory wm = drools.getWorkingMemory(); wm.clearRuleFlowGroup( \"calculate\" ); end rule \"kill\" ruleflow-group \"kill\" no-loop when theCell: Cell( phase == Phase.KILL ) then modify( theCell ){ setCellState( CellState.DEAD ), setPhase( Phase.DONE ); } end rule \"birth\" ruleflow-group \"birth\" no-loop when theCell: Cell( phase == Phase.BIRTH ) then modify( theCell ){ setCellState( CellState.LIVE ), setPhase( Phase.DONE ); } end",
"rule \"Calculate Live\" ruleflow-group \"calculate\" lock-on-active when theCell: Cell( cellState == CellState.LIVE ) Neighbor( cell == theCell, USDneighbor : neighbor ) then modify( USDneighbor ){ setLiveNeighbors( USDneighbor.getLiveNeighbors() + 1 ), setPhase( Phase.EVALUATE ); } end rule \"Calculate Dead\" ruleflow-group \"calculate\" lock-on-active when theCell: Cell( cellState == CellState.DEAD ) Neighbor( cell == theCell, USDneighbor : neighbor ) then modify( USDneighbor ){ setLiveNeighbors( USDneighbor.getLiveNeighbors() - 1 ), setPhase( Phase.EVALUATE ); } end",
"ksession.insert( new Location(\"Office\", \"House\") ); ksession.insert( new Location(\"Kitchen\", \"House\") ); ksession.insert( new Location(\"Knife\", \"Kitchen\") ); ksession.insert( new Location(\"Cheese\", \"Kitchen\") ); ksession.insert( new Location(\"Desk\", \"Office\") ); ksession.insert( new Location(\"Chair\", \"Office\") ); ksession.insert( new Location(\"Computer\", \"Desk\") ); ksession.insert( new Location(\"Drawer\", \"Desk\") );",
"go1 Office is in the House --- go2 Drawer is in the House --- go3 --- Key is in the Office --- go4 Chair is in the Office Desk is in the Office Key is in the Office Computer is in the Office Drawer is in the Office --- go5 Chair is in Office Desk is in Office Drawer is in Desk Key is in Drawer Kitchen is in House Cheese is in Kitchen Knife is in Kitchen Computer is in Desk Office is in House Key is in Office Drawer is in House Computer is in House Key is in House Desk is in House Chair is in House Knife is in House Cheese is in House Computer is in Office Drawer is in Office Key is in Desk",
"query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end",
"rule \"go\" salience 10 when USDs : String() then System.out.println( USDs ); end rule \"go1\" when String( this == \"go1\" ) isContainedIn(\"Office\", \"House\"; ) then System.out.println( \"Office is in the House\" ); end",
"ksession.insert( \"go1\" ); ksession.fireAllRules();",
"go1 Office is in the House",
"rule \"go2\" when String( this == \"go2\" ) isContainedIn(\"Drawer\", \"House\"; ) then System.out.println( \"Drawer is in the House\" ); end",
"ksession.insert( \"go2\" ); ksession.fireAllRules();",
"go2 Drawer is in the House",
"isContainedIn(x==drawer, z==desk)",
"Location(x==drawer, y==desk)",
"rule \"go3\" when String( this == \"go3\" ) isContainedIn(\"Key\", \"Office\"; ) then System.out.println( \"Key is in the Office\" ); end",
"ksession.insert( \"go3\" ); ksession.fireAllRules();",
"go3",
"ksession.insert( new Location(\"Key\", \"Drawer\") ); ksession.fireAllRules();",
"Key is in the Office",
"rule \"go4\" when String( this == \"go4\" ) isContainedIn(thing, \"Office\"; ) then System.out.println( thing + \"is in the Office\" ); end",
"ksession.insert( \"go4\" ); ksession.fireAllRules();",
"go4 Chair is in the Office Desk is in the Office Key is in the Office Computer is in the Office Drawer is in the Office",
"rule \"go5\" when String( this == \"go5\" ) isContainedIn(thing, location; ) then System.out.println(thing + \" is in \" + location ); end",
"ksession.insert( \"go5\" ); ksession.fireAllRules();",
"go5 Chair is in Office Desk is in Office Drawer is in Desk Key is in Drawer Kitchen is in House Cheese is in Kitchen Knife is in Kitchen Computer is in Desk Office is in House Key is in Office Drawer is in House Computer is in House Key is in House Desk is in House Chair is in House Knife is in House Cheese is in House Computer is in Office Drawer is in Office Key is in Desk"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/decision-examples-ide-con_decision-engine |
Introduction | Introduction Welcome to the Security Guide ! The Security Guide is designed to assist users of Red Hat Enterprise Linux in learning the processes and practices of securing workstations and servers against local and remote intrusion, exploitation, and malicious activity. The Security Guide details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. With proper administrative knowledge, vigilance, and tools, systems running Red Hat Enterprise Linux can be both fully functional and secured from most common intrusion and exploit methods. This guide discusses several security-related topics in great detail, including: Firewalls Encryption Securing Critical Services Virtual Private Networks Intrusion Detection The manual is divided into the following parts: General Introduction to Security Configuring Red Hat Enterprise Linux for Security Assessing Your Security Intrusions and Incident Response Appendix We would like to thank Thomas Rude for his generous contributions to this manual. He wrote the Vulnerability Assessments and Incident Response chapters. Thanks, Thomas! This manual assumes that you have an advanced knowledge of Red Hat Enterprise Linux. If you are a new user or only have basic to intermediate knowledge of Red Hat Enterprise Linux and need more information on using the system, refer to the following guides which discuss the fundamental aspects of Red Hat Enterprise Linux in greater detail than the Security Guide : The Installation Guide provides information regarding installation. The Red Hat Enterprise Linux Introduction to System Administration contains introductory information for new Red Hat Enterprise Linux system administrators. The System Administrators Guide offers detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user. This guide includes some services that are discussed (from a security standpoint) in the Security Guide . The Reference Guide provides detailed information suited for more experienced users to refer to when needed, as opposed to step-by-step instructions. 1. More to Come The Security Guide is part of Red Hat's growing commitment to provide useful and timely support and information to Red Hat Enterprise Linux users. As new tools and security methodologies are released, this guide will be expanded to include them. 1.1. Send in Your Feedback If you spot a typo in the Security Guide , or if you have thought of a way to make this manual better, we would love to hear from you! Submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rhel-sg . Be sure to mention the manual's identifier: By mentioning the identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, include the section number and some of the surrounding text so we can find it easily. | [
"rhel-sg"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-intro |
Deploying OpenShift Data Foundation on any platform | Deploying OpenShift Data Foundation on any platform Red Hat OpenShift Data Foundation 4.14 Instructions on deploying OpenShift Data Foundation on any platform including virtualized and cloud environments. Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on any platform. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_any_platform/index |
Chapter 3. Refining advisor service recommendations | Chapter 3. Refining advisor service recommendations The advisor service puts a lot of information at your fingertips, especially when Red Hat Insights for Red Hat Enterprise Linux is deployed on a large Red Hat Enterprise Linux infrastructure. There are several ways to refine advisor recommendations to help you focus on the issues and systems that matter the most. This section describes the multiple options for filtering, sorting, and excluding specific recommendations from your advisor results. 3.1. Viewing all advisor-service recommendations When you first enter the advisor service recommendations view, you see the default view and results, with the Systems Impacted (set to 1 or more systems) and Status (set to Enabled) filters applied. To get a comprehensive view of all recommendations, including those not impacting your systems and those in the advisor database, use the following procedure: Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Procedure Navigate to the Operations > Advisor > Recommendations page. Click the close icon next to the Systems Impacted and Status filters. You can now browse through all of the potential recommendations for your systems. Optionally, return to the default recommendations view, which shows Systems impacted set to 1 or more and Status set to Enabled , by clicking Reset filters . 3.2. Filtering advisor-service recommendations Select from the following filters to refine your recommendations list: Name. In the subfilter field, start typing the recommendation description or a keyword and select from the options presented. Total risk. In the subfilter field, select from one or more: Critical, Important, Moderate, or Low. Risk of change. In the subfilter field, select from High, Moderate, Low, or Very low. Impact. In the subfilter field, select from Critical, Important, Moderate, or Low. Likelihood. In the subfilter field, select from Critical, Important, Moderate, or Low. Category. In the subfilter field, select from Availability, Performance, Stability, or Security. Incidents. In the subfilter field, select to show recommendations with or without incidents having occurred. Remediation. In the subfilter field, select Ansible playbook or Manual for the remediation method. Reboot required. In the subfilter field, select either Required or Not required. Ansible support. In the subfilter field, select to show recommendations with or without Ansible Playbook support. Status. In the subfilter field, select from All, Enabled, Disabled, Red Hat disabled. Systems impacted. In the subfilter field, select either 1 or more or None. To set filters, complete the following steps. Procedure Navigate to the Operations > Advisor > Recommendations page and log in if necessary. Click the filter icon and select a filter category from the dropdown list. Click the dropdown arrow in the subfilter menu and check a box (or boxes) to activate a subfilter or, in the case of Description, begin typing the name or description of a recommendation. 3.3. Recommendations table columns and sorting Sort columns in the advisor recommendations table using the following parameters: Name. Alphabetize by A to Z or Z to A. Modified. Order by the number of days since the recommendation was last modified or published, newest or oldest first. Total risk. View in order of criticality. Systems. View by the number of your systems that are impacted. Remediation. Sort by recommendations that have or do not have Ansible Playbook support. 3.4.
Disabling an advisor-service recommendation Prerequisite You are logged into the Red Hat Hybrid Cloud Console and have RHEL Advisor administrator access. Procedure Disable specific recommendations affecting your systems so that they no longer appear in your results. To disable a recommendation, complete the following steps: Navigate to the Operations > Advisor > Recommendations page and log in if necessary. Locate the recommendation to disable. Click the more options icon at the right end of the row and then click Disable recommendation . 3.4.1. Viewing and enabling a previously disabled recommendation Prerequisite You are logged into the Red Hat Hybrid Cloud Console and have RHEL Advisor administrator access. Procedure When a recommendation is disabled, you will no longer see the recommendation in your advisor results. To reverse this action, complete the following steps: Navigate to the Operations > Advisor > Recommendations page and log in if necessary. Click the Filter dropdown and select Status . In the subfilter dropdown list, select Disabled . Locate the recommendation to enable. Click the more options icon on the right side of the row and click Enable recommendation . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service/assembly-adv-assess-refining-recommendations |
Chapter 10. availability | Chapter 10. availability This chapter describes the commands under the availability command. 10.1. availability zone list List availability zones and their status Usage: Table 10.1. Command arguments Value Summary -h, --help Show this help message and exit --compute List compute availability zones --network List network availability zones --volume List volume availability zones --long List additional fields in output Table 10.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 10.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 10.4. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 10.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
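"# Illustrative invocation of the command documented below (assumes an authenticated OpenStack CLI environment): list only compute availability zones, include the additional fields, and emit JSON. openstack availability zone list --compute --long -f json",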
"openstack availability zone list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--compute] [--network] [--volume] [--long]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/availability |
4.7. Configuring Fencing for Cluster Members | 4.7. Configuring Fencing for Cluster Members Once you have completed the initial steps of creating a cluster and creating fence devices, you need to configure fencing for the cluster nodes by following the steps in this section. Note that you must configure fencing for each node in the cluster. The following sections provide procedures for configuring a single fence device for a node, configuring a node with a backup fence device, and configuring a node with redundant power supplies: Section 4.7.1, "Configuring a Single Fence Device for a Node" Section 4.7.2, "Configuring a Backup Fence Device" Section 4.7.3, "Configuring a Node with Redundant Power" 4.7.1. Configuring a Single Fence Device for a Node Use the following procedure to configure a node with a single fence device. From the cluster-specific page, you can configure fencing for the nodes in the cluster by clicking on Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page. Click on a node name. Clicking a link for a node causes a page to be displayed for that link showing how that node is configured. The node-specific page displays any services that are currently running on the node, as well as any failover domains of which this node is a member. You can modify an existing failover domain by clicking on its name. For information on configuring failover domains, see Section 4.8, "Configuring a Failover Domain" . On the node-specific page, under Fence Devices , click Add Fence Method . This displays the Add Fence Method to Node dialog box. Enter a Method Name for the fencing method that you are configuring for this node. This is an arbitrary name that will be used by Red Hat High Availability Add-On; it is not the same as the DNS name for the device. Click Submit . This displays the node-specific screen that now displays the method you have just added under Fence Devices . Configure a fence instance for this method by clicking the Add Fence Instance button that appears beneath the fence method. This displays the Add Fence Device (Instance) drop-down menu from which you can select a fence device you have previously configured, as described in Section 4.6.1, "Creating a Fence Device" . Select a fence device for this method. If this fence device requires that you configure node-specific parameters, the display shows the parameters to configure. For information on fencing parameters, see Appendix A, Fence Device Parameters . Note For non-power fence methods (that is, SAN/storage fencing), Unfencing is selected by default on the node-specific parameters display. This ensures that a fenced node's access to storage is not re-enabled until the node has been rebooted. When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For information on unfencing a node, see the fence_node (8) man page. Click Submit . This returns you to the node-specific screen with the fence method and fence instance displayed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-member-conga-CA |
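For reference, the fence settings created through luci are persisted in the cluster's /etc/cluster/cluster.conf file. The fragment below is a minimal sketch of the kind of markup that results, assuming a hypothetical node node1.example.com fenced by an IPMI device named ipmi-node1 ; the exact attributes vary by fence agent, as described in Appendix A, Fence Device Parameters .
<clusternode name="node1.example.com" nodeid="1"> <fence> <method name="Method1"> <device name="ipmi-node1"/> </method> </fence> </clusternode> <fencedevices> <fencedevice agent="fence_ipmilan" name="ipmi-node1" ipaddr="192.0.2.10" login="admin" passwd="password"/> </fencedevices>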
Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent | Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent Agents in Red Hat Enterprise Linux such as the QEMU guest agent and the SPICE agent can be deployed to help the virtualization tools run more optimally on your system. These agents are described in this chapter. Note To further optimize and tune host and guest performance, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . 11.1. QEMU Guest Agent The QEMU guest agent runs inside the guest and allows the host machine to issue commands to the guest operating system using libvirt, helping with functions such as freezing and thawing filesystems. The guest operating system then responds to those commands asynchronously. The QEMU guest agent package, qemu-guest-agent , is installed by default in Red Hat Enterprise Linux 7. This section covers the libvirt commands and options available to the guest agent. Important Note that it is only safe to rely on the QEMU guest agent when run by trusted guests. An untrusted guest may maliciously ignore or abuse the guest agent protocol, and although built-in safeguards exist to prevent a denial of service attack on the host, the host requires guest co-operation for operations to run as expected. Note that QEMU guest agent can be used to enable and disable virtual CPUs (vCPUs) while the guest is running, thus adjusting the number of vCPUs without using the hot plug and hot unplug features. For more information, see Section 20.36.6, "Configuring Virtual CPU Count" . 11.1.1. Setting up Communication between the QEMU Guest Agent and Host The host machine communicates with the QEMU guest agent through a VirtIO serial connection between the host and guest machines. A VirtIO serial channel is connected to the host via a character device driver (typically a Unix socket), and the guest listens on this serial channel. Note The qemu-guest-agent does not detect if the host is listening to the VirtIO serial channel. However, as the current use for this channel is to listen for host-to-guest events, the probability of a guest virtual machine running into problems by writing to the channel with no listener is very low. Additionally, the qemu-guest-agent protocol includes synchronization markers that allow the host physical machine to force a guest virtual machine back into sync when issuing a command, and libvirt already uses these markers, so that guest virtual machines are able to safely discard any earlier pending undelivered responses. 11.1.1.1. Configuring the QEMU Guest Agent on a Linux Guest The QEMU guest agent can be configured on a running or shut down virtual machine. If configured on a running guest, the guest will start using the guest agent immediately. If the guest is shut down, the QEMU guest agent will be enabled at boot. Either virsh or virt-manager can be used to configure communication between the guest and the QEMU guest agent. The following instructions describe how to configure the QEMU guest agent on a Linux guest. Procedure 11.1. 
Setting up communication between guest agent and host with virsh on a shut down Linux guest Shut down the virtual machine Ensure the virtual machine (named rhel7 in this example) is shut down before configuring the QEMU guest agent: Add the QEMU guest agent channel to the guest XML configuration Edit the guest's XML file to add the QEMU guest agent details: Add the following to the guest's XML file and save the changes: <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Start the virtual machine Install the QEMU guest agent on the guest Install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: Alternatively, the QEMU guest agent can be configured on a running guest with the following steps: Procedure 11.2. Setting up communication between guest agent and host on a running Linux guest Create an XML file for the QEMU guest agent # cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Attach the QEMU guest agent to the virtual machine Attach the QEMU guest agent to the running virtual machine (named rhel7 in this example) with this command: Install the QEMU guest agent on the guest Install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: Procedure 11.3. Setting up communication between the QEMU guest agent and host with virt-manager Shut down the virtual machine Ensure the virtual machine is shut down before configuring the QEMU guest agent. To shut down the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click the light switch icon from the menu bar. Add the QEMU guest agent channel to the guest Open the virtual machine's hardware details by clicking the lightbulb icon at the top of the guest window. Click the Add Hardware button to open the Add New Virtual Hardware window, and select Channel . Select the QEMU guest agent from the Name drop-down list and click Finish : Figure 11.1. Selecting the QEMU guest agent channel device Start the virtual machine To start the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click the play button on the menu bar. Install the QEMU guest agent on the guest Open the guest with virt-manager and install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: The QEMU guest agent is now configured on the rhel7 virtual machine. | [
"virsh shutdown rhel7",
"virsh edit rhel7",
"<channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>",
"virsh start rhel7",
"yum install qemu-guest-agent",
"systemctl start qemu-guest-agent",
"cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>",
"virsh attach-device rhel7 agent.xml",
"yum install qemu-guest-agent",
"systemctl start qemu-guest-agent",
"yum install qemu-guest-agent",
"systemctl start qemu-guest-agent"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-QEMU_Guest_Agent |
Chapter 21. Set Up Java Management Extensions (JMX) | Chapter 21. Set Up Java Management Extensions (JMX) 21.1. About Java Management Extensions (JMX) Java Management Extension (JMX) is a Java based technology that provides tools to manage and monitor applications, devices, system objects, and service oriented networks. Each of these objects is managed, and monitored by MBeans . JMX is the de facto standard for middleware management and administration. As a result, JMX is used in Red Hat JBoss Data Grid to expose management and statistical information. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-Set_Up_Java_Management_Extensions_JMX |
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] | Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror.
Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. Type object Required source Property Type Description mirrors array (string) mirrors is one or more repositories that may also contain the same images. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies DELETE : delete collection of ImageContentSourcePolicy GET : list objects of kind ImageContentSourcePolicy POST : create an ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} DELETE : delete an ImageContentSourcePolicy GET : read the specified ImageContentSourcePolicy PATCH : partially update the specified ImageContentSourcePolicy PUT : replace the specified ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status GET : read status of the specified ImageContentSourcePolicy PATCH : partially update status of the specified ImageContentSourcePolicy PUT : replace status of the specified ImageContentSourcePolicy 13.2.1. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies HTTP method DELETE Description delete collection of ImageContentSourcePolicy Table 13.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentSourcePolicy Table 13.2. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentSourcePolicy Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.5. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 202 - Accepted ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.2. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method DELETE Description delete an ImageContentSourcePolicy Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentSourcePolicy Table 13.9. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentSourcePolicy Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11.
HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentSourcePolicy Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.14. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.3. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method GET Description read status of the specified ImageContentSourcePolicy Table 13.16. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentSourcePolicy Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18.
HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentSourcePolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.21. HTTP responses HTTP code Response body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/imagecontentsourcepolicy-operator-openshift-io-v1alpha1
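The reference above describes only the schema, so a concrete manifest may help. The following is a minimal sketch built from the fields documented in section 13.1; the registry host names are placeholders, not values taken from the reference.

# Apply a minimal ImageContentSourcePolicy from a heredoc.
oc apply -f - <<'EOF'
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-icsp
spec:
  repositoryDigestMirrors:
  - source: registry.example.com/team/app    # repository users refer to
    mirrors:
    - mirror.example.com/team/app            # tried first, by-digest pulls only
EOF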
9.2.3. Installing and Removing Packages (and Dependencies) | 9.2.3. Installing and Removing Packages (and Dependencies) With the two filters selected, Only available and Only end user files , search for the screen window manager for the command line and highlight the package. You now have access to some very useful information about it, including: a clickable link to the project homepage; the Yum package group it is found in, if any; the license of the package; a pointer to the GNOME menu location from where the application can be opened, if applicable; and the size of the package, which is relevant when we download and install it. Figure 9.7. Viewing and installing a package with PackageKit's Add/Remove Software window When the check box next to a package or group is checked, then that item is already installed on the system. Checking an unchecked box causes it to be marked for installation, which only occurs when the Apply button is clicked. In this way, you can search for and select multiple packages or package groups before performing the actual installation transactions. Additionally, you can remove installed packages by unchecking the checked box, and the removal will occur along with any pending installations when Apply is pressed. Dependency resolution, which may add additional packages to be installed or removed, is performed after pressing Apply . PackageKit will then display a window listing those additional packages to install or remove, and ask for confirmation to proceed. Select screen and click the Apply button. You will then be prompted for the superuser password; enter it, and PackageKit will install screen . After finishing the installation, PackageKit sometimes presents you with a list of your newly-installed applications and offers you the choice of running them immediately. Alternatively, you will remember that finding a package and selecting it in the Add/Remove Software window shows you the Location of where in the GNOME menus its application shortcut is located, which is helpful when you want to run it. Once it is installed, you can run screen , a screen manager that allows you to have multiple logins on one terminal, by typing screen at a shell prompt. screen is a very useful utility, but we decide that we do not need it and we want to uninstall it. Remembering that we need to change the Only available filter we recently used to install it to Only installed in Filters → Installed , we search for screen again and uncheck it. The program did not install any dependencies of its own; if it had, those would be automatically removed as well, as long as they were not also dependencies of any other packages still installed on our system. Warning Although PackageKit automatically resolves dependencies during package installation and removal, it is unable to remove a package without also removing packages which depend on it. This type of operation can only be performed by RPM , is not advised, and can potentially leave your system in a non-functioning state or cause applications to behave erratically and/or crash. Figure 9.8. Removing a package with PackageKit's Add/Remove Software window | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Installing_and_Removing_Packages_and_Dependencies
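For comparison, the same install/remove transaction can be run from a shell; PackageKit and yum share the same package metadata on Red Hat Enterprise Linux 6, so the dependency resolution behaves the same way.

su -c 'yum install screen'   # install, resolving dependencies
screen                       # run the screen manager
su -c 'yum remove screen'    # uninstall when no longer needed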
Chapter 2. Understanding authentication | Chapter 2. Understanding authentication For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. As an administrator, you can configure authentication for OpenShift Container Platform. 2.1. Users A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform. Several types of users can exist: User type Description Regular users This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the OpenShift Container Platform API are authenticated using the following methods: OAuth access tokens Obtained from the OpenShift Container Platform OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. 
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: $ oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request . 2.3.1.2. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user.
For more information, see User impersonation in the Kubernetes documentation. 2.3.1.3. Authentication metrics for Prometheus OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts: openshift_auth_basic_password_count counts the number of oc login user name and password attempts. openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error . openshift_auth_form_password_count counts the number of web console login attempts. openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error . openshift_auth_password_total counts the total number of oc login and web console login attempts. | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/understanding-authentication |
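To make the challenging-client flow above concrete, here is a hedged shell sketch of a token request. It assumes the route lookup from the table, basic-auth credentials accepted by your identity provider, and the X-CSRF-Token requirement described in the note; the access token is returned in the Location header fragment rather than in the response body.

# Discover the OAuth route, then request a token via WWW-Authenticate.
ROUTE=$(oc get route oauth-openshift -n openshift-authentication -o json | jq -r .spec.host)

curl -sku "<user>:<password>" -I \
  -H "X-CSRF-Token: 1" \
  "https://${ROUTE}/oauth/authorize?client_id=openshift-challenging-client&response_type=token" \
  | grep -i '^location:'   # token appears as access_token=... in the fragment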
Chapter 3. Updating Red Hat build of OpenJDK container images | Chapter 3. Updating Red Hat build of OpenJDK container images To ensure that a Red Hat build of OpenJDK container with Java applications includes the latest security updates, rebuild the container. Procedure Pull the base Red Hat build of OpenJDK image. Deploy the Red Hat build of OpenJDK application. For more information, see Deploying Red Hat build of OpenJDK applications in containers . The Red Hat build of OpenJDK container with the Red Hat build of OpenJDK application is updated. Additional resources For more information, see Red Hat OpenJDK Container images . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/packaging_red_hat_build_of_openjdk_11_applications_in_containers/updating-openjdk-container-images
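A hedged sketch of that rebuild from a shell follows; the base image name is an assumption (a commonly used OpenJDK 11 UBI image), not a value taken from the chapter, and your Containerfile and tag will differ. The --no-cache flag forces the build to pick up the freshly pulled base layers.

podman pull registry.access.redhat.com/ubi8/openjdk-11   # refresh the base image
podman build --no-cache -t my-app:latest .               # rebuild the application image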
Preface | Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_camel_quarkus_projects/pr01
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_connected_red_hat_satellite_to_6.16/providing-feedback-on-red-hat-documentation_upgrading-connected |
Chapter 5. Performing cross-site operations via JMX | Chapter 5. Performing cross-site operations via JMX Perform cross-site operations such as pushing state transfer and bringing sites online via JMX. 5.1. Registering JMX MBeans Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics; otherwise, Data Grid provides 0 values for all statistic attributes in JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the jmx element or object to the cache container and specify true as the value for the enabled attribute or field. Add the domain attribute or field and specify the domain where JMX MBeans are exposed, if required. Save and close your Data Grid configuration. JMX configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" 5.2. Performing cross-site operations with JMX clients Perform cross-site operations with JMX clients. Prerequisites Configure Data Grid to register JMX MBeans Procedure Connect to Data Grid with any JMX client. Invoke operations from the following MBeans: XSiteAdmin provides cross-site operations for caches. GlobalXSiteAdminOperations provides cross-site operations for Cache Managers. For example, to bring sites back online, invoke bringSiteOnline(siteName) . Additional resources XSiteAdmin MBean GlobalXSiteAdminOperations MBean 5.3. JMX MBeans for cross-site replication Data Grid provides JMX MBeans for cross-site replication that let you gather statistics and perform remote operations. The org.infinispan:type=Cache component provides the following JMX MBeans: XSiteAdmin exposes cross-site operations that apply to specific cache instances. RpcManager provides statistics about network requests for cross-site replication. AsyncXSiteStatistics provides statistics for asynchronous cross-site replication, including queue size and number of conflicts. The org.infinispan:type=CacheManager component includes the following JMX MBean: GlobalXSiteAdminOperations exposes cross-site operations that apply to all caches in a cache container. For details about JMX MBeans along with descriptions of available operations and statistics, see the Data Grid JMX Components documentation. Additional resources Data Grid JMX Components | [
"<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\""
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_cross-site_replication/cross-site-operations-jmx |
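The chapter says "any JMX client"; one scriptable option is the third-party jmxterm utility. The following sketch is entirely an assumption: the jar name, the JMX endpoint on localhost:9999, and the exact MBean ObjectName pattern (built from the org.infinispan:type=Cache component and XSiteAdmin names above) all need to be checked against your deployment, for example by browsing the beans interactively first.

# Invoke bringSiteOnline through jmxterm (non-interactive sketch).
java -jar jmxterm-uber.jar -l localhost:9999 -n <<'EOF'
bean org.infinispan:type=Cache,name="mycache(dist_sync)",manager="default",component=XSiteAdmin
run bringSiteOnline NYC
EOF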
Operations Guide | Operations Guide Red Hat Ceph Storage 5 Operational tasks for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/index |
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ JMS in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ JMS, you must install Apache Maven . To use AMQ JMS, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.apache.qpid</groupId> <artifactId>qpid-jms-client</artifactId> <version>0.53.0.redhat-00001</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ JMS can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Clients 2.8.0 JMS Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. $ unzip amq-clients-2.8.0-jms-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . 2.4. Installing the examples Procedure Use the git clone command to clone the source repository to a local directory named qpid-jms : $ git clone https://github.com/apache/qpid-jms.git qpid-jms Change to the qpid-jms directory and use the git checkout command to switch to the 0.53.0 branch: $ cd qpid-jms $ git checkout 0.53.0 The resulting local directory is referred to as <source-dir> throughout this document. | [
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.apache.qpid</groupId> <artifactId>qpid-jms-client</artifactId> <version>0.53.0.redhat-00001</version> </dependency>",
"unzip amq-clients-2.8.0-jms-maven-repository.zip",
"git clone https://github.com/apache/qpid-jms.git qpid-jms",
"cd qpid-jms git checkout 0.53.0"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/installation |
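After configuring the repository, a quick sanity check can confirm that Maven resolves the client artifact, and the cloned sources can be built. The examples module path is an assumption about the qpid-jms source tree layout, so adjust it if your checkout differs.

# Verify the artifact resolves from the Red Hat repository.
mvn dependency:get \
  -Dartifact=org.apache.qpid:qpid-jms-client:0.53.0.redhat-00001 \
  -DremoteRepositories=https://maven.repository.redhat.com/ga

# Build the examples from the cloned source tree (assumed module name).
cd qpid-jms/qpid-jms-examples && mvn clean package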
Chapter 2. SELinux Contexts | Chapter 2. SELinux Contexts Processes and files are labeled with an SELinux context that contains additional information, such as an SELinux user, role, type, and, optionally, a level. When running SELinux, all of this information is used to make access control decisions. In Red Hat Enterprise Linux, SELinux provides a combination of Role-Based Access Control (RBAC), Type Enforcement (TE), and, optionally, Multi-Level Security (MLS). The following is an example showing SELinux context. SELinux contexts are used on processes, Linux users, and files, on Linux operating systems that run SELinux. Use the following command to view the SELinux context of files and directories: SELinux contexts follow the SELinux user:role:type:level syntax. The fields are as follows: SELinux user The SELinux user identity is an identity known to the policy that is authorized for a specific set of roles, and for a specific MLS/MCS range. Each Linux user is mapped to an SELinux user using SELinux policy. This allows Linux users to inherit the restrictions placed on SELinux users. The mapped SELinux user identity is used in the SELinux context for processes in that session, in order to define what roles and levels they can enter. Enter the following command as root to view a list of mappings between SELinux and Linux user accounts (you need to have the policycoreutils-python package installed): Output may differ slightly from system to system: The Login Name column lists Linux users. The SELinux User column lists which SELinux user the Linux user is mapped to. For processes, the SELinux user limits which roles and levels are accessible. The MLS/MCS Range column, is the level used by Multi-Level Security (MLS) and Multi-Category Security (MCS). The Service column determines the correct SELinux context, in which the Linux user is supposed to be logged in to the system. By default, the asterisk ( * ) character is used, which stands for any service. role Part of SELinux is the Role-Based Access Control (RBAC) security model. The role is an attribute of RBAC. SELinux users are authorized for roles, and roles are authorized for domains. The role serves as an intermediary between domains and SELinux users. The roles that can be entered determine which domains can be entered; ultimately, this controls which object types can be accessed. This helps reduce vulnerability to privilege escalation attacks. type The type is an attribute of Type Enforcement. The type defines a domain for processes, and a type for files. SELinux policy rules define how types can access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. level The level is an attribute of MLS and MCS. An MLS range is a pair of levels, written as lowlevel-highlevel if the levels differ, or lowlevel if the levels are identical ( s0-s0 is the same as s0 ). Each level is a sensitivity-category pair, with categories being optional. If there are categories, the level is written as sensitivity:category-set . If there are no categories, it is written as sensitivity . If the category set is a contiguous series, it can be abbreviated. For example, c0.c3 is the same as c0,c1,c2,c3 . The /etc/selinux/targeted/setrans.conf file maps levels ( s0:c0 ) to human-readable form (that is CompanyConfidential ). In Red Hat Enterprise Linux, targeted policy enforces MCS, and in MCS, there is just one sensitivity, s0 . 
MCS in Red Hat Enterprise Linux supports 1024 different categories: c0 through to c1023 . s0-s0:c0.c1023 is sensitivity s0 and authorized for all categories. MLS enforces the Bell-La Padula Mandatory Access Model, and is used in Labeled Security Protection Profile (LSPP) environments. To use MLS restrictions, install the selinux-policy-mls package, and configure MLS to be the default SELinux policy. The MLS policy shipped with Red Hat Enterprise Linux omits many program domains that were not part of the evaluated configuration, and therefore, MLS on a desktop workstation is unusable (no support for the X Window System); however, an MLS policy from the upstream SELinux Reference Policy can be built that includes all program domains. For more information on MLS configuration, see Section 4.13, "Multi-Level Security (MLS)" . 2.1. Domain Transitions A process in one domain transitions to another domain by executing an application that has the entrypoint type for the new domain. The entrypoint permission is used in SELinux policy and controls which applications can be used to enter a domain. The following example demonstrates a domain transition: Procedure 2.1. An Example of a Domain Transition A user wants to change their password. To do this, they run the passwd utility. The /usr/bin/passwd executable is labeled with the passwd_exec_t type: The passwd utility accesses /etc/shadow , which is labeled with the shadow_t type: An SELinux policy rule states that processes running in the passwd_t domain are allowed to read and write to files labeled with the shadow_t type. The shadow_t type is only applied to files that are required for a password change. This includes /etc/gshadow , /etc/shadow , and their backup files. An SELinux policy rule states that the passwd_t domain has its entrypoint permission set to the passwd_exec_t type. When a user runs the passwd utility, the user's shell process transitions to the passwd_t domain. With SELinux, since the default action is to deny, and a rule exists that allows (among other things) applications running in the passwd_t domain to access files labeled with the shadow_t type, the passwd application is allowed to access /etc/shadow , and update the user's password. This example is not exhaustive, and is used as a basic example to explain domain transition. Although there is an actual rule that allows subjects running in the passwd_t domain to access objects labeled with the shadow_t file type, other SELinux policy rules must be met before the subject can transition to a new domain. In this example, Type Enforcement ensures: The passwd_t domain can only be entered by executing an application labeled with the passwd_exec_t type; can only execute from authorized shared libraries, such as the lib_t type; and cannot execute any other applications. Only authorized domains, such as passwd_t , can write to files labeled with the shadow_t type. Even if other processes are running with superuser privileges, those processes cannot write to files labeled with the shadow_t type, as they are not running in the passwd_t domain. Only authorized domains can transition to the passwd_t domain. For example, the sendmail process running in the sendmail_t domain does not have a legitimate reason to execute passwd ; therefore, it can never transition to the passwd_t domain. Processes running in the passwd_t domain can only read and write to authorized types, such as files labeled with the etc_t or shadow_t types. 
This prevents the passwd application from being tricked into reading or writing arbitrary files. | [
"~]USD ls -Z file1 -rwxrw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *",
"~]USD ls -Z /usr/bin/passwd -rwsr-xr-x root root system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd",
"~]USD ls -Z /etc/shadow -r--------. root root system_u:object_r:shadow_t:s0 /etc/shadow"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-selinux_contexts |
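The domain transition in Procedure 2.1 can be observed directly from a shell. The sketch below assumes two terminals: start a password change in one, then inspect the process contexts from the other; the passwd process should appear in the passwd_t domain even though your shell runs unconfined.

id -Z                  # SELinux context of the current shell
passwd                 # in terminal 1: starts the password change
ps -eZ | grep passwd   # in terminal 2: shows passwd running in the passwd_t domain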
Chapter 2. About migrating from OpenShift Container Platform 3 to 4 | Chapter 2. About migrating from OpenShift Container Platform 3 to 4 OpenShift Container Platform 4 contains new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. OpenShift Container Platform 4 clusters are deployed and managed very differently from OpenShift Container Platform 3. The most effective way to migrate from OpenShift Container Platform 3 to 4 is by using a CI/CD pipeline to automate deployments in an application lifecycle management framework. If you do not have a CI/CD pipeline or if you are migrating stateful applications, you can use the Migration Toolkit for Containers (MTC) to migrate your application workloads. You can use Red Hat Advanced Cluster Management for Kubernetes to help you import and manage your OpenShift Container Platform 3 clusters easily, enforce policies, and redeploy your applications. Take advantage of the free subscription to use Red Hat Advanced Cluster Management to simplify your migration process. To successfully transition to OpenShift Container Platform 4, review the following information: Differences between OpenShift Container Platform 3 and 4 Architecture Installation and upgrade Storage, network, logging, security, and monitoring considerations About the Migration Toolkit for Containers Workflow File system and snapshot copy methods for persistent volumes (PVs) Direct volume migration Direct image migration Advanced migration options Automating your migration with migration hooks Using the MTC API Excluding resources from a migration plan Configuring the MigrationController custom resource for large-scale migrations Enabling automatic PV resizing for direct volume migration Enabling cached Kubernetes clients for improved performance For new features and enhancements, technical changes, and known issues, see the MTC release notes . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/about-migrating-from-3-to-4 |
4.251. python-sqlalchemy | 4.251. python-sqlalchemy 4.251.1. RHSA-2012:0369 - Moderate: python-sqlalchemy security update An updated python-sqlalchemy package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. SQLAlchemy is an Object Relational Mapper (ORM) that provides a flexible, high-level interface to SQL databases. Security Fix CVE-2012-0805 It was discovered that SQLAlchemy did not sanitize values for the limit and offset keywords for SQL select statements. If an application using SQLAlchemy accepted values for these keywords, and did not filter or sanitize them before passing them to SQLAlchemy, it could allow an attacker to perform an SQL injection attack against the application. All users of python-sqlalchemy are advised to upgrade to this updated package, which contains a patch to correct this issue. All running applications using SQLAlchemy must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-sqlalchemy |
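Applying the erratum is a straightforward package update; per the advisory, any running application that uses SQLAlchemy must then be restarted. The verification step below just confirms the patched build is installed.

yum update python-sqlalchemy
rpm -q python-sqlalchemy   # confirm the updated package version
# Restart all running applications that use SQLAlchemy so the fix takes effect.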
Appendix B. S3 common request headers | Appendix B. S3 common request headers The following table lists the valid common request headers and their descriptions. Table B.1. Request Headers Request Header Description CONTENT_LENGTH Length of the request body. DATE Request time and date (in UTC). HOST The name of the host server. AUTHORIZATION Authorization token. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/s3-common-request-headers_dev |
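To show how those headers look on the wire, here is a hedged raw request sketch. The host name is a placeholder, and the AWS-style Authorization value stands in for a real computed signature; normally an S3 client library produces these headers for you.

curl -X GET "http://rgw.example.com/mybucket/" \
  -H "Host: rgw.example.com" \
  -H "Date: $(date -Ru)" \
  -H "Content-Length: 0" \
  -H "Authorization: AWS <access_key>:<computed_signature>"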
Chapter 6. Deploying hosted control planes in a disconnected environment | Chapter 6. Deploying hosted control planes in a disconnected environment 6.1. Introduction to hosted control planes in a disconnected environment In the context of hosted control planes, a disconnected environment is an OpenShift Container Platform deployment that is not connected to the internet and that uses hosted control planes as a base. You can deploy hosted control planes in a disconnected environment on bare metal or OpenShift Virtualization. Hosted control planes in disconnected environments function differently than in standalone OpenShift Container Platform: The control plane is in the management cluster. The control plane is where the pods of the hosted control plane are run and managed by the Control Plane Operator. The data plane is in the workers of the hosted cluster. The data plane is where the workloads and other pods run, all managed by the HostedClusterConfig Operator. Depending on where the pods are running, they are affected by the ImageDigestMirrorSet (IDMS) or ImageContentSourcePolicy (ICSP) that is created in the management cluster or by the ImageContentSource that is set in the spec field of the manifest for the hosted cluster. The spec field is translated into an IDMS object on the hosted cluster. You can deploy hosted control planes in a disconnected environment on IPv4, IPv6, and dual-stack networks. IPv4 is one of the simplest network configurations to deploy hosted control planes in a disconnected environment. IPv4 ranges require fewer external components than IPv6 or dual-stack setups. For hosted control planes on OpenShift Virtualization in a disconnected environment, use either an IPv4 or a dual-stack network. Important Hosted control planes in a disconnected environment on a dual-stack network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.2. Deploying hosted control planes on OpenShift Virtualization in a disconnected environment When you deploy hosted control planes in a disconnected environment, some of the steps differ depending on the platform you use. The following procedures are specific to deployments on OpenShift Virtualization. 6.2.1. Prerequisites You have a disconnected OpenShift Container Platform environment serving as your management cluster. You have an internal registry to mirror images on. For more information, see About disconnected installation mirroring . 6.2.2. Configuring image mirroring for hosted control planes in a disconnected environment Image mirroring is the process of fetching images from external registries, such as registry.redhat.com or quay.io , and storing them in your private registry. In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information: The OpenShift Container Platform versions to mirror. The versions are in quay.io . The additional Operators to mirror. Select packages individually. The extra images that you want to add to the repository. 
Prerequisites Ensure that the registry server is running before you start the mirroring process. Procedure To configure image mirroring, complete the following steps: Ensure that your ${HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to. By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-{product-version} minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.17 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5 1 2 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 3 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. 4 Images specified in the additionalImages field are examples only and are not strictly needed. 5 For deployments that use the KubeVirt provider, include this line. Start the mirroring process by entering the following command: $ oc-mirror --v2 --config imagesetconfig.yaml \ --workspace file://mirror-file docker://<registry> After the mirroring process is finished, you have a new folder named mirror-file , which contains the ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), and the catalog sources to apply on the hosted cluster. Mirror the nightly or CI versions of OpenShift Container Platform by configuring the imagesetconfig.yaml file as follows: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2 # ... 1 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 2 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. If you have a partially disconnected environment, mirror the images from the image set configuration to a registry by entering the following command: $ oc mirror -c imagesetconfig.yaml \ --workspace file://<file_path> docker://<mirror_registry_url> --v2 For more information, see "Mirroring an image set in a partially disconnected environment".
If you have a fully disconnected environment, perform the following steps: Mirror the images from the specified image set configuration to the disk by entering the following command: $ oc mirror -c imagesetconfig.yaml file://<file_path> --v2 For more information, see "Mirroring an image set in a fully disconnected environment". Process the image set file on the disk and mirror the contents to a target mirror registry by entering the following command: $ oc mirror -c imagesetconfig.yaml \ --from file://<file_path> docker://<mirror_registry_url> --v2 Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks . Additional resources Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment 6.2.3. Applying objects in the management cluster After the mirroring process is complete, you need to apply two objects in the management cluster: ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS) Catalog sources When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/ . The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY , you need to apply the newly generated catalog sources. The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image. Procedure To check the new sources, run the following command by using the new CatalogSource as a source: $ oc get packagemanifest To apply the artifacts, complete the following steps: Create the ICSP or IDMS artifacts by entering the following command: $ oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml Wait for the nodes to become ready, and then enter the following command: $ oc apply -f catalogSource-XXXXXXXX-index.yaml Mirror the OLM catalogs and configure the hosted cluster to point to the mirror. When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2" , where the source registry string is a key and the destination registry is a value. To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs: hypershift.openshift.io/certified-operators-catalog-image hypershift.openshift.io/community-operators-catalog-image hypershift.openshift.io/redhat-marketplace-catalog-image hypershift.openshift.io/redhat-operators-catalog-image In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates. Next steps Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes . Additional resources Mirroring images for a disconnected installation using the oc-mirror plugin . 6.2.4.
Deploying multicluster engine Operator for a disconnected installation of hosted control planes The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the following documentation to understand the prerequisites and steps to install it: About cluster lifecycle with multicluster engine operator Installing and upgrading multicluster engine operator 6.2.5. Configuring TLS certificates for a disconnected installation of hosted control planes To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster. 6.2.5.1. Adding the registry CA to the management cluster To add the registry CA to the management cluster, complete the following steps. Procedure Create a config map that resembles the following example: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- 1 Specify the name of the config map. 2 Specify the namespace for the config map. 3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000 . 4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | - . If you use other methods, issues can occur when the pod reads the certificates. Patch the cluster-wide object, image.config.openshift.io to include the following specification: spec: additionalTrustedCA: - name: registry-config As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OpenShift Container Platform payload for hosted cluster deployments. The process to patch the object might take several minutes to be completed. 6.2.5.2. Adding the registry CA to the worker nodes for the hosted cluster In order for the data plane workers in the hosted cluster to be able to retrieve images from the private registry, you need to add the registry CA to the worker nodes. Procedure In the hc.spec.additionalTrustBundle file, add the following specification: spec: additionalTrustBundle: - name: user-ca-bundle 1 1 The user-ca-bundle entry is a config map that you create in the step. In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example: apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 1 Specify the namespace where the HostedCluster object is created. 6.2.6. Creating a hosted cluster on OpenShift Virtualization A hosted cluster is an OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. 6.2.6.1. 
Requirements to deploy hosted control planes on OpenShift Virtualization As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information: Run the management cluster on bare metal. Each hosted cluster must have a cluster-wide unique name. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage". 6.2.6.2. Creating a hosted cluster with the KubeVirt platform by using the CLI To create a hosted cluster, you can use the hosted control plane command-line interface, hcp . Procedure Create a hosted cluster with the KubeVirt platform by entering the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <node_pool_replica_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <value_for_memory> \ 4 --cores <value_for_cpu> \ 5 --etcd-storage-class=<etcd_storage_class> 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the node pool replica count, for example, 3 . You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the etcd storage class name, for example, lvm-storageclass . Note You can use the --release-image flag to set up the hosted cluster with a specific OpenShift Container Platform release. A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag. After a few moments, verify that the hosted control plane pods are running by entering the following command: USD oc -n clusters-<hosted-cluster-name> get pods Example output NAME READY STATUS RESTARTS AGE capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s . . . redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0 66s A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned. To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command: USD oc get --namespace clusters hostedclusters See the following example output, which illustrates a fully provisioned HostedCluster object: Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. 6.2.6.3. Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it.
By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on. For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry: *.apps.mgmt-cluster.example.com As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OpenShift Container Platform cluster has the following default ingress: *.apps.guest.apps.mgmt-cluster.example.com Procedure For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes. You can configure this behavior by entering the following command: USD oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]' Note When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies only to the default ingress behavior. 6.2.6.4. Customizing ingress and DNS behavior If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration. 6.2.6.4.1. Deploying a hosted cluster that specifies the base domain To create a hosted cluster that specifies a base domain, complete the following steps. Procedure Enter the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <value_for_memory> \ 4 --cores <value_for_cpu> \ 5 --base-domain <basedomain> 6 1 Specify the name of your hosted cluster. 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the base domain, for example, hypershift.lab . As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, *.apps.example.hypershift.lab . The hosted cluster remains in Partial status because after you create a hosted cluster with a unique base domain, you must configure the required DNS records and load balancer.
View the status of your hosted cluster by entering the following command: USD oc get --namespace clusters hostedclusters Example output NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available Access the cluster by entering the following commands: USD hcp create kubeconfig --name <hosted_cluster_name> \ > <hosted_cluster_name>-kubeconfig USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing) Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. steps To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS". Note If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB". 6.2.6.4.2. Setting up the load balancer Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address. Procedure A NodePort service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports. Get the HTTP node port by entering the following command: USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}' Note the HTTP node port value to use in the step. Get the HTTPS node port by entering the following command: USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}' Note the HTTPS node port value to use in the step. Create the load balancer service by entering the following command: oc apply -f - apiVersion: v1 kind: Service metadata: labels: app: <hosted_cluster_name> name: <hosted_cluster_name>-apps namespace: clusters-<hosted_cluster_name> spec: ports: - name: https-443 port: 443 protocol: TCP targetPort: <https_node_port> 1 - name: http-80 port: 80 protocol: TCP targetPort: <http-node-port> 2 selector: kubevirt.io: virt-launcher type: LoadBalancer 1 Specify the HTTPS node port value that you noted in the step. 2 Specify the HTTP node port value that you noted in the step. 6.2.6.4.3. Setting up a wildcard DNS Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service. Procedure Get the external IP address by entering the following command: USD oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}' Example output 192.168.20.30 Configure a wildcard DNS entry that references the external IP address. 
View the following example DNS entry: *.apps.<hosted_cluster_name>.<base_domain>. The DNS entry must be able to route inside and outside of the cluster. Example DNS resolution dig +short test.apps.example.hypershift.lab 192.168.20.30 Check that the hosted cluster status has moved from Partial to Completed by entering the following command: USD oc get --namespace clusters hostedclusters Example output NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. 6.2.7. Finishing the deployment You can monitor the deployment of a hosted cluster from two perspectives: the control plane and the data plane. 6.2.7.1. Monitoring the control plane While the deployment proceeds, you can monitor the control plane by gathering information about the following artifacts: The HyperShift Operator The HostedControlPlane pod The bare metal hosts The agents The InfraEnv resource The HostedCluster and NodePool resources Procedure Enter the following commands to monitor the control plane: USD export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig USD watch "oc get pod -n hypershift;echo;echo;\ oc get pod -n clusters-hosted-ipv4;echo;echo;\ oc get bmh -A;echo;echo;\ oc get agent -A;echo;echo;\ oc get infraenv -A;echo;echo;\ oc get hostedcluster -A;echo;echo;\ oc get nodepool -A;echo;echo;" 6.2.7.2. Monitoring the data plane While the deployment proceeds, you can monitor the data plane by gathering information about the following artifacts: The cluster version The nodes, specifically, whether the nodes joined the cluster The cluster Operators Procedure Enter the following commands: 6.3. Deploying hosted control planes on bare metal in a disconnected environment When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and multicluster engine for Kubernetes Operator work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see Enabling the central infrastructure management service . 6.3.1. Disconnected environment architecture for bare metal The following diagram illustrates an example architecture of a disconnected environment: Configure infrastructure services, including the registry certificate deployment with TLS support, web server, and DNS, to ensure that the disconnected deployment works. Create a config map in the openshift-config namespace. In this example, the config map is named registry-config . The content of the config map is the Registry CA certificate. The data field of the config map must contain the following key/value: Key: <registry_dns_domain_name>..<port> , for example, registry.hypershiftdomain.lab..5000: . Ensure that you place .. after the registry DNS domain name when you specify a port. Value: The certificate content For more information about creating a config map, see Configuring TLS certificates for a disconnected installation of hosted control planes . Modify the images.config.openshift.io custom resource (CR) specification and add a new field named additionalTrustedCA with a value of name: registry-config . Create a config map that contains two data fields.
One field contains the registries.conf file in RAW format, and the other field contains the Registry CA and is named ca-bundle.crt . The config map belongs to the multicluster-engine namespace, and the config map name is referenced in other objects. For an example of a config map, see the following sample configuration: apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- # ... -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4" [[registry]] prefix = "" location = "registry.redhat.io/rhacm2" mirror-by-digest-only = true # ... # ... In the multicluster engine Operator namespace, you create the multiclusterengine CR, which enables both the Agent and hypershift-addon add-ons. The multicluster engine Operator namespace must contain the config maps to modify behavior in a disconnected deployment. The namespace also contains the multicluster-engine , assisted-service , and hypershift-addon-manager pods. Create the following objects that are necessary to deploy the hosted cluster: Secrets: Secrets contain the pull secret, SSH key, and etcd encryption key. Config map: The config map contains the CA certificate of the private registry. HostedCluster : The HostedCluster resource defines the configuration of the cluster that the user intends to create. NodePool : The NodePool resource identifies the node pool that references the machines to use for the data plane. After you create the hosted cluster objects, the HyperShift Operator establishes the HostedControlPlane namespace to accommodate control plane pods. The namespace also hosts components such as Agents, bare metal hosts (BMHs), and the InfraEnv resource. Later, you create the InfraEnv resource, and after ISO creation, you create the BMHs and their secrets that contain baseboard management controller (BMC) credentials. The Metal3 Operator in the openshift-machine-api namespace inspects the new BMHs. Then, the Metal3 Operator tries to connect to the BMCs to start them by using the configured LiveISO and RootFS values that are specified through the AgentServiceConfig CR in the multicluster engine Operator namespace. After the worker nodes of the HostedCluster resource are started, an Agent container is started. This agent establishes contact with the Assisted Service, which orchestrates the actions to complete the deployment. Initially, you need to scale the NodePool resource to the number of worker nodes for the HostedCluster resource. The Assisted Service manages the remaining tasks. At this point, you wait for the deployment process to be completed. 6.3.2. Requirements to deploy hosted control planes on bare metal in a disconnected environment To configure hosted control planes in a disconnected environment, you must meet the following prerequisites: CPU: The number of CPUs provided determines how many hosted clusters can run concurrently. In general, use 16 CPUs for each node for 3 nodes. For minimal development, you can use 12 CPUs for each node for 3 nodes. Memory: The amount of RAM affects how many hosted clusters can be hosted. Use 48 GB of RAM for each node. For minimal development, 18 GB of RAM might be sufficient. 
Storage: Use SSD storage for multicluster engine Operator. Management cluster: 250 GB. Registry: The storage needed depends on the number of releases, operators, and images that are hosted. An acceptable number might be 500 GB, preferably separated from the disk that hosts the hosted cluster. Web server: The storage needed depends on the number of ISOs and images that are hosted. An acceptable number might be 500 GB. Production: For a production environment, separate the management cluster, the registry, and the web server on different disks. This example illustrates a possible configuration for production: Registry: 2 TB Management cluster: 500 GB Web server: 2 TB 6.3.3. Extracting the release image digest You can extract the OpenShift Container Platform release image digest by using the tagged image. Procedure Obtain the image digest by running the following command: USD oc adm release info <tagged_openshift_release_image> | grep "Pull From" Replace <tagged_openshift_release_image> with the tagged image for the supported OpenShift Container Platform version, for example, quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 . Example output 6.3.4. Configuring the hypervisor for a disconnected installation of hosted control planes The following information applies to virtual machine environments only. Procedure To deploy a virtual management cluster, install the required packages by entering the following command: USD sudo dnf install dnsmasq radvd vim golang podman bind-utils \ net-tools httpd-tools tree htop strace tmux -y Enable and start the Podman service by entering the following command: USD systemctl enable --now podman To use kcli to deploy the management cluster and other virtual components, install and configure the hypervisor by entering the following commands: USD sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm USD sudo usermod -aG qemu,libvirt USD(id -un) USD sudo newgrp libvirt USD sudo systemctl enable --now libvirtd USD sudo dnf -y copr enable karmab/kcli USD sudo dnf -y install kcli USD sudo kcli create pool -p /var/lib/libvirt/images default USD kcli create host kvm -H 127.0.0.1 local USD sudo setfacl -m u:USD(id -un):rwx /var/lib/libvirt/images USD kcli create network -c 192.168.122.0/24 default Enable the network manager dispatcher to ensure that virtual machines can resolve the required domains, routes, and registries. To enable the network manager dispatcher, in the /etc/NetworkManager/dispatcher.d/ directory, create a script named forcedns that contains the following content: #!/bin/bash export IP="192.168.126.1" 1 export BASE_RESOLV_CONF="/run/NetworkManager/resolv.conf" if ! grep -q "USDIP" /etc/resolv.conf; then export TMP_FILE=USD(mktemp /etc/forcedns_resolv.conf.XXXXXX) cp USDBASE_RESOLV_CONF USDTMP_FILE chmod --reference=USDBASE_RESOLV_CONF USDTMP_FILE sed -i -e "s/dns.base.domain.name//" \ -e "s/search /& dns.base.domain.name /" \ -e "0,/nameserver/s/nameserver/& USDIP\n&/" USDTMP_FILE 2 mv USDTMP_FILE /etc/resolv.conf fi echo "ok" 1 Modify the IP variable to point to the IP address of the hypervisor interface that hosts the OpenShift Container Platform management cluster. 2 Replace dns.base.domain.name with the DNS base domain name. After you create the file, add permissions by entering the following command: USD chmod 755 /etc/NetworkManager/dispatcher.d/forcedns Run the script and verify that the output returns ok . Configure ksushy to simulate baseboard management controllers (BMCs) for the virtual machines.
Enter the following commands: USD sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y USD kcli create sushy-service --ssl --ipv6 --port 9000 USD sudo systemctl daemon-reload USD systemctl enable --now ksushy Test whether the service is correctly functioning by entering the following command: USD systemctl status ksushy If you are working in a development environment, configure the hypervisor system to allow various types of connections through different virtual networks within the environment. Note If you are working in a production environment, you must establish proper rules for the firewalld service and configure SELinux policies to maintain a secure environment. For SELinux, enter the following command: USD sed -i s/^SELINUX=.*USD/SELINUX=permissive/ /etc/selinux/config; \ setenforce 0 For firewalld , enter the following command: USD systemctl disable --now firewalld For libvirtd , enter the following commands: USD systemctl restart libvirtd USD systemctl enable --now libvirtd 6.3.5. DNS configurations on bare metal The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to destination where the API Server can be reached. The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods. Example DNS configuration api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23 If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example. Example DNS configuration for an IPv6 network api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10 If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6. Example DNS configuration for a dual stack network host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9] 6.3.6. 
Deploying a registry for hosted control planes in a disconnected environment For development environments, deploy a small, self-hosted registry by using a Podman container. For production environments, deploy an enterprise-hosted registry, such as Red Hat Quay, Nexus, or Artifactory. Procedure To deploy a small registry by using Podman, complete the following steps: As a privileged user, access the USD{HOME} directory and create the following script: #!/usr/bin/env bash set -euo pipefail PRIMARY_NIC=USD(ls -1 /sys/class/net | grep -v podman | head -1) export PATH=/root/bin:USDPATH export PULL_SECRET="/root/baremetal/hub/openshift_pull.json" 1 if [[ ! -f USDPULL_SECRET ]];then echo "Pull Secret not found, exiting..." exit 1 fi dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel export IP=USD(ip -o addr show USDPRIMARY_NIC | head -1 | awk '{print USD4}' | cut -d'/' -f1) REGISTRY_NAME=registry.USD(hostname --long) REGISTRY_USER=dummy REGISTRY_PASSWORD=dummy KEY=USD(echo -n USDREGISTRY_USER:USDREGISTRY_PASSWORD | base64) echo "{\"auths\": {\"USDREGISTRY_NAME:5000\": {\"auth\": \"USDKEY\", \"email\": \"[email protected]\"}}}" > /root/disconnected_pull.json mv USD{PULL_SECRET} /root/openshift_pull.json.old jq ".auths += {\"USDREGISTRY_NAME:5000\": {\"auth\": \"USDKEY\",\"email\": \"[email protected]\"}}" < /root/openshift_pull.json.old > USDPULL_SECRET mkdir -p /opt/registry/{auth,certs,data,conf} cat <<EOF > /opt/registry/conf/config.yml version: 0.1 log: fields: service: registry storage: cache: blobdescriptor: inmemory filesystem: rootdirectory: /var/lib/registry delete: enabled: true http: addr: :5000 headers: X-Content-Type-Options: [nosniff] health: storagedriver: enabled: true interval: 10s threshold: 3 compatibility: schema1: enabled: true EOF openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj "/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=USDREGISTRY_NAME" -addext "subjectAltName=DNS:USDREGISTRY_NAME" cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust extract htpasswd -bBc /opt/registry/auth/htpasswd USDREGISTRY_USER USDREGISTRY_PASSWORD podman create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest [ "USD?" == "0" ] || !! systemctl enable --now registry 1 Replace the location of the PULL_SECRET with the appropriate location for your setup. Name the script file registry.sh and save it. When you run the script, it pulls in the following information: The registry name, based on the hypervisor hostname The necessary credentials and user access details Adjust permissions by adding the execution flag as follows: USD chmod u+x USD{HOME}/registry.sh To run the script without any parameters, enter the following command: USD USD{HOME}/registry.sh The script starts the server. The script uses a systemd service for management purposes. 
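As an optional smoke test, you can confirm that the registry responds before you continue. The following command is a sketch that assumes the default dummy credentials from the script and that the script added the self-signed certificate to the host trust store: USD curl -u dummy:dummy https://registry.USD(hostname --long):5000/v2/_catalog An empty repository list, {"repositories":[]} , indicates that the registry is up and ready to receive mirrored images.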
If you need to manage the script, you can use the following commands: USD systemctl status registry USD systemctl start registry USD systemctl stop registry The root folder for the registry is in the /opt/registry directory and contains the following subdirectories: certs contains the TLS certificates. auth contains the credentials. data contains the registry images. conf contains the registry configuration. 6.3.7. Setting up a management cluster for hosted control planes in a disconnected environment To set up an OpenShift Container Platform management cluster, you can use dev-scripts, or if you are based on virtual machines, you can use the kcli tool. The following instructions are specific to the kcli tool. Procedure Ensure that the right networks are prepared for use in the hypervisor. The networks will host both the management and hosted clusters. Enter the following kcli command: USD kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false \ -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual where: -c specifies the CIDR for the network. -P dhcp=false configures the network to disable the DHCP, which is handled by the dnsmasq that you configured. -P dns=false configures the network to disable the DNS, which is also handled by the dnsmasq that you configured. --domain sets the domain to search. dns.base.domain.name is the DNS base domain name. dual is the name of the network that you are creating. After the network is created, review the following output: [root@hypershiftbm ~]# kcli list network Listing Networks... +---------+--------+---------------------+-------+------------------+------+ | Network | Type | Cidr | Dhcp | Domain | Mode | +---------+--------+---------------------+-------+------------------+------+ | default | routed | 192.168.122.0/24 | True | default | nat | | ipv4 | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat | | ipv4 | routed | 192.168.125.0/24 | False | dns.base.domain.name | nat | | ipv6 | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat | +---------+--------+---------------------+-------+------------------+------+ [root@hypershiftbm ~]# kcli info network ipv6 Providing information about network ipv6... cidr: 2620:52:0:1306::/64 dhcp: false domain: dns.base.domain.name mode: nat plan: kvirt type: routed Ensure that the pull secret and kcli plan files are in place so that you can deploy the OpenShift Container Platform management cluster: Confirm that the pull secret is in the same folder as the kcli plan, and that the pull secret file is named openshift_pull.json . Add the kcli plan, which contains the OpenShift Container Platform definition, in the mgmt-compact-hub-dual.yaml file.
Ensure that you update the file contents to match your environment: plan: hub-dual force: true version: stable tag: "<4.x.y>-x86_64" 1 cluster: "hub-dual" dualstack: true domain: dns.base.domain.name api_ip: 192.168.126.10 ingress_ip: 192.168.126.11 service_networks: - 172.30.0.0/16 - fd02::/112 cluster_networks: - 10.132.0.0/14 - fd01::/48 disconnected_url: registry.dns.base.domain.name:5000 disconnected_update: true disconnected_user: dummy disconnected_password: dummy disconnected_operators_version: v4.14 disconnected_operators: - name: metallb-operator - name: lvms-operator channels: - name: stable-4.14 disconnected_extra_images: - quay.io/user-name/trbsht:latest - quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3 - registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 dualstack: true disk_size: 200 extra_disks: [200] memory: 48000 numcpus: 16 ctlplanes: 3 workers: 0 manifests: extra-manifests metal3: true network: dual users_dev: developer users_devpassword: developer users_admin: admin users_adminpassword: admin metallb_pool: dual-virtual-network metallb_ranges: - 192.168.126.150-192.168.126.190 metallb_autoassign: true apps: - users - lvms-operator - metallb-operator vmrules: - hub-bootstrap: nets: - name: ipv6 mac: aa:aa:aa:aa:10:07 - hub-ctlplane-0: nets: - name: ipv6 mac: aa:aa:aa:aa:10:01 - hub-ctlplane-1: nets: - name: ipv6 mac: aa:aa:aa:aa:10:02 - hub-ctlplane-2: nets: - name: ipv6 mac: aa:aa:aa:aa:10:03 1 Replace <4.x.y> with the supported OpenShift Container Platform version you want to use. To provision the management cluster, enter the following command: USD kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml steps Next, configure the web server. 6.3.8. Configuring the web server for hosted control planes in a disconnected environment You need to configure an additional web server to host the Red Hat Enterprise Linux CoreOS (RHCOS) images that are associated with the OpenShift Container Platform release that you are deploying as a hosted cluster. Procedure To configure the web server, complete the following steps: Extract the openshift-install binary from the OpenShift Container Platform release that you want to use by entering the following command: USD oc adm -a USD{LOCAL_SECRET_JSON} release extract --command=openshift-install \ "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Run the following script. The script creates a folder in the /opt/srv directory. The folder contains the RHCOS images to provision the worker nodes. #!/bin/bash WEBSRV_FOLDER=/opt/srv ROOTFS_IMG_URL="USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')" 1 LIVE_ISO_URL="USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')" 2 mkdir -p USD{WEBSRV_FOLDER}/images curl -Lk USD{ROOTFS_IMG_URL} -o USD{WEBSRV_FOLDER}/images/USD{ROOTFS_IMG_URL##*/} curl -Lk USD{LIVE_ISO_URL} -o USD{WEBSRV_FOLDER}/images/USD{LIVE_ISO_URL##*/} chmod -R 755 USD{WEBSRV_FOLDER}/* ## Run Webserver only if it is not already running podman ps --noheading | grep -q websrv-ai if [[ USD? != 0 ]];then echo "Launching Webserver pod..." /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080 fi 1 You can find the ROOTFS_IMG_URL value on the OpenShift CI Release page. 2 You can find the LIVE_ISO_URL value on the OpenShift CI Release page. After the download is completed, a container runs to host the images on a web server.
The container uses a variation of the official HTTPd image, which also enables it to work with IPv6 networks. 6.3.9. Configuring image mirroring for hosted control planes in a disconnected environment Image mirroring is the process of fetching images from external registries, such as registry.redhat.com or quay.io , and storing them in your private registry. In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information: The OpenShift Container Platform versions to mirror. The versions are in quay.io . The additional Operators to mirror. Select packages individually. The extra images that you want to add to the repository. Prerequisites Ensure that the registry server is running before you start the mirroring process. Procedure To configure image mirroring, complete the following steps: Ensure that your USD{HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to. By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-{product-version} minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.17 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5 1 2 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 3 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. 4 Images specified in the additionalImages field are examples only and are not strictly needed. 5 For deployments that use the KubeVirt provider, include this line. Start the mirroring process by entering the following command: USD oc-mirror --v2 --config imagesetconfig.yaml \ --workspace file://mirror-file docker://<registry> After the mirroring process is finished, you have a new folder named mirror-file , which contains the ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), and the catalog sources to apply on the hosted cluster. Mirror the nightly or CI versions of OpenShift Container Platform by configuring the imagesetconfig.yaml file as follows: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2 # ... 1 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 
2 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. If you have a partially disconnected environment, mirror the images from the image set configuration to a registry by entering the following command: USD oc mirror -c imagesetconfig.yaml \ --workspace file://<file_path> docker://<mirror_registry_url> --v2 For more information, see "Mirroring an image set in a partially disconnected environment". If you have a fully disconnected environment, perform the following steps: Mirror the images from the specified image set configuration to the disk by entering the following command: USD oc mirror -c imagesetconfig.yaml file://<file_path> --v2 For more information, see "Mirroring an image set in a fully disconnected environment". Process the image set file on the disk and mirror the contents to a target mirror registry by entering the following command: USD oc mirror -c imagesetconfig.yaml \ --from file://<file_path> docker://<mirror_registry_url> --v2 Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks . Additional resources Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment 6.3.10. Applying objects in the management cluster After the mirroring process is complete, you need to apply two objects in the management cluster: ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS) Catalog sources When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/ . The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY , you need to apply the newly generated catalog sources. The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image. Procedure To check the new sources, run the following command by using the new CatalogSource as a source: USD oc get packagemanifest To apply the artifacts, complete the following steps: Create the ICSP or IDMS artifacts by entering the following command: USD oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml Wait for the nodes to become ready, and then enter the following command: USD oc apply -f catalogSource-XXXXXXXX-index.yaml Mirror the OLM catalogs and configure the hosted cluster to point to the mirror. When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2" , where the source registry string is a key and the destination registry is a value. 
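For example, assuming a hosted cluster named hosted-dual in the clusters namespace and the mirror registry used in the earlier examples, you might apply the annotation as follows. The cluster name and registry addresses are illustrative placeholders from this procedure, not required values: USD oc annotate hostedcluster hosted-dual -n clusters \ "hypershift.openshift.io/olm-catalogs-is-registry-overrides=registry.redhat.io=registry.dns.base.domain.name:5000"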
To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs: hypershift.openshift.io/certified-operators-catalog-image hypershift.openshift.io/community-operators-catalog-image hypershift.openshift.io/redhat-marketplace-catalog-image hypershift.openshift.io/redhat-operators-catalog-image In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates. steps Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes . Additional resources Mirroring images for a disconnected installation using the oc-mirror plugin . 6.3.11. Deploying multicluster engine Operator for a disconnected installation of hosted control planes The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the following documentation to understand the prerequisites and steps to install it: About cluster lifecycle with multicluster engine operator Installing and upgrading multicluster engine operator 6.3.11.1. Deploying AgentServiceConfig resources The AgentServiceConfig custom resource is an essential component of the Assisted Service add-on that is part of multicluster engine Operator. It is responsible for bare metal cluster deployment. When the add-on is enabled, you deploy the AgentServiceConfig resource to configure the add-on. In addition to configuring the AgentServiceConfig resource, you need to include additional config maps to ensure that multicluster engine Operator functions properly in a disconnected environment. Procedure Configure the custom registries by adding the following config map, which contains the disconnected details to customize the deployment: apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "registry.dns.base.domain.name:5000/openshift4" 1 [[registry]] prefix = "" location = "registry.redhat.io/rhacm2" mirror-by-digest-only = true # ... # ... 1 Replace dns.base.domain.name with the DNS base domain name. The object contains two fields: Custom CAs: This field contains the Certificate Authorities (CAs) that are loaded into the various processes of the deployment. Registries: The Registries.conf field contains information about images and namespaces that need to be consumed from a mirror registry rather than the original source registry. 
Configure the Assisted Service by adding the AssistedServiceConfig object, as shown in the following example: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: annotations: unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config 1 name: agent namespace: multicluster-engine spec: mirrorRegistryRef: name: custom-registries 2 databaseStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 20Gi osImages: 3 - cpuArchitecture: x86_64 4 openshiftVersion: "4.14" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img 5 url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso version: 414.92.202308281054-0 - cpuArchitecture: x86_64 openshiftVersion: "4.15" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso version: 415.92.202403270524-0 1 The metadata.annotations["unsupported.agent-install.openshift.io/assisted-service-configmap"] annotation references the config map name that the Operator consumes to customize behavior. 2 The spec.mirrorRegistryRef.name annotation points to the config map that contains disconnected registry information that the Assisted Service Operator consumes. This config map adds those resources during the deployment process. 3 The spec.osImages field contains different versions available for deployment by this Operator. This field is mandatory. This example assumes that you already downloaded the RootFS and LiveISO files. 4 Add a cpuArchitecture subsection for every OpenShift Container Platform release that you want to deploy. In this example, cpuArchitecture subsections are included for 4.14 and 4.15. 5 In the rootFSUrl and url fields, replace dns.base.domain.name with the DNS base domain name. Deploy all of the objects by concatenating them into a single file and applying them to the management cluster. To do so, enter the following command: USD oc apply -f agentServiceConfig.yaml The command triggers two pods. Example output assisted-image-service-0 1/1 Running 2 11d 1 assisted-service-668b49548-9m7xw 2/2 Running 5 11d 2 1 The assisted-image-service pod is responsible for creating the Red Hat Enterprise Linux CoreOS (RHCOS) boot image template, which is customized for each cluster that you deploy. 2 The assisted-service refers to the Operator. steps Configure TLS certificates by completing the steps in Configuring TLS certificates for a disconnected installation of hosted control planes . 6.3.12. Configuring TLS certificates for a disconnected installation of hosted control planes To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster. 6.3.12.1. Adding the registry CA to the management cluster To add the registry CA to the management cluster, complete the following steps. 
Procedure Create a config map that resembles the following example: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- 1 Specify the name of the config map. 2 Specify the namespace for the config map. 3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000 . 4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | - . If you use other methods, issues can occur when the pod reads the certificates. Patch the cluster-wide object, image.config.openshift.io to include the following specification: spec: additionalTrustedCA: - name: registry-config As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OpenShift Container Platform payload for hosted cluster deployments. The process to patch the object might take several minutes to be completed. 6.3.12.2. Adding the registry CA to the worker nodes for the hosted cluster In order for the data plane workers in the hosted cluster to be able to retrieve images from the private registry, you need to add the registry CA to the worker nodes. Procedure In the hc.spec.additionalTrustBundle file, add the following specification: spec: additionalTrustBundle: - name: user-ca-bundle 1 1 The user-ca-bundle entry is a config map that you create in the step. In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example: apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 1 Specify the namespace where the HostedCluster object is created. 6.3.13. Creating a hosted cluster on bare metal A hosted cluster is an OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. 6.3.13.1. Deploying hosted cluster objects Typically, the HyperShift Operator creates the HostedControlPlane namespace. However, in this case, you want to include all the objects before the HyperShift Operator begins to reconcile the HostedCluster object. Then, when the Operator starts the reconciliation process, it can find all of the objects in place. Procedure Create a YAML file with the following information about the namespaces: --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace>-<hosted_cluster_name> 1 spec: {} status: {} --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace> 2 spec: {} status: {} 1 Replace <hosted_cluster_name> with your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 
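The <hosted_cluster_name>-etcd-encryption-key secret in the following step expects a base64-encoded 32-byte key in its key field. As a minimal sketch, assuming the openssl CLI is available on your workstation, you can generate a suitable value by entering the following command and pasting the output into the Secret object: USD openssl rand -base64 32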
Create a YAML file with the following information about the config maps and secrets to include in the HostedCluster deployment: --- apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 --- apiVersion: v1 data: .dockerconfigjson: xxxxxxxxx kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-pull-secret 2 namespace: <hosted_cluster_namespace> 3 --- apiVersion: v1 kind: Secret metadata: name: sshkey-cluster-<hosted_cluster_name> 4 namespace: <hosted_cluster_namespace> 5 stringData: id_rsa.pub: ssh-rsa xxxxxxxxx --- apiVersion: v1 data: key: nTPtVBEt03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y= kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-etcd-encryption-key 6 namespace: <hosted_cluster_namespace> 7 type: Opaque 1 3 5 7 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 2 4 6 Replace <hosted_cluster_name> with your hosted cluster. Create a YAML file that contains the RBAC roles so that Assisted Service agents can be in the same HostedControlPlane namespace as the hosted control plane and still be managed by the cluster API: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: capi-provider-role namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2 rules: - apiGroups: - agent-install.openshift.io resources: - agents verbs: - '*' 1 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 2 Replace <hosted_cluster_name> with your hosted cluster. Create a YAML file with information about the HostedCluster object, replacing values as necessary: apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: additionalTrustBundle: name: "user-ca-bundle" olmCatalogPlacement: guest imageContentSources: 3 - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: - registry.<dns.base.domain.name>:5000/openshift/release 4 - source: quay.io/openshift-release-dev/ocp-release mirrors: - registry.<dns.base.domain.name>:5000/openshift/release-images 5 - mirrors: ... ... autoscaling: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: <dns.base.domain.name> 6 etcd: managed: storage: persistentVolume: size: 8Gi restoreSnapshotURL: null type: PersistentVolume managementType: Managed fips: false networking: clusterNetwork: - cidr: 10.132.0.0/14 - cidr: fd01::/48 networkType: OVNKubernetes serviceNetwork: - cidr: 172.31.0.0/16 - cidr: fd02::/112 platform: agent: agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name> 7 8 type: Agent pullSecret: name: <hosted_cluster_name>-pull-secret 9 release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:<4.x.y>-x86_64 10 11 secretEncryption: aescbc: activeKey: name: <hosted_cluster_name>-etcd-encryption-key 12 type: aescbc services: - service: APIServer servicePublishingStrategy: type: LoadBalancer - service: OAuthServer servicePublishingStrategy: type: Route - service: OIDC servicePublishingStrategy: type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route sshKey: name: sshkey-cluster-<hosted_cluster_name> 13 status: controlPlaneEndpoint: host: "" port: 0 1 7 9 12 13 Replace <hosted_cluster_name> with your hosted cluster. 
2 8 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 The imageContentSources section contains mirror references for user workloads within the hosted cluster. 4 5 6 10 Replace <dns.base.domain.name> with the DNS base domain name. 11 Replace <4.x.y> with the supported OpenShift Container Platform version you want to use. Add an annotation in the HostedCluster object that points to the HyperShift Operator release in the OpenShift Container Platform release: Obtain the image payload by entering the following command: USD oc adm release info \ registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:<4.x.y>-x86_64 \ | grep hypershift where <dns.base.domain.name> is the DNS base domain name and <4.x.y> is the supported OpenShift Container Platform version you want to use. Example output hypershift sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 By using the OpenShift Container Platform Images namespace, check the digest by entering the following command: podman pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 where <dns.base.domain.name> is the DNS base domain name. Example output podman pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8... Getting image source signatures Copying blob d8190195889e skipped: already exists Copying blob c71d2589fba7 skipped: already exists Copying blob d4dc6e74b6ce skipped: already exists Copying blob 97da74cc6d8f skipped: already exists Copying blob b70007a560c9 done Copying config 3a62961e6e done Writing manifest to image destination Storing signatures 3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde The release image that is set in the HostedCluster object must use the digest rather than the tag; for example, quay.io/openshift-release-dev/ocp-release@sha256:e3ba11bd1e5e8ea5a0b36a75791c90f29afb0fdbe4125be4e48f69c76a5c47a0 . Create all of the objects that you defined in the YAML files by concatenating them into a file and applying them against the management cluster. 
Apply the concatenated file against the management cluster by entering the following command: USD oc apply -f 01-4.14-hosted_cluster-nodeport.yaml Example output NAME READY STATUS RESTARTS AGE capi-provider-5b57dbd6d5-pxlqc 1/1 Running 0 3m57s catalog-operator-9694884dd-m7zzv 2/2 Running 0 93s cluster-api-f98b9467c-9hfrq 1/1 Running 0 3m57s cluster-autoscaler-d7f95dd5-d8m5d 1/1 Running 0 93s cluster-image-registry-operator-5ff5944b4b-648ht 1/2 Running 0 93s cluster-network-operator-77b896ddc-wpkq8 1/1 Running 0 94s cluster-node-tuning-operator-84956cd484-4hfgf 1/1 Running 0 94s cluster-policy-controller-5fd8595d97-rhbwf 1/1 Running 0 95s cluster-storage-operator-54dcf584b5-xrnts 1/1 Running 0 93s cluster-version-operator-9c554b999-l22s7 1/1 Running 0 95s control-plane-operator-6fdc9c569-t7hr4 1/1 Running 0 3m57s csi-snapshot-controller-785c6dc77c-8ljmr 1/1 Running 0 77s csi-snapshot-controller-operator-7c6674bc5b-d9dtp 1/1 Running 0 93s csi-snapshot-webhook-5b8584875f-2492j 1/1 Running 0 77s dns-operator-6874b577f-9tc6b 1/1 Running 0 94s etcd-0 3/3 Running 0 3m39s hosted-cluster-config-operator-f5cf5c464-4nmbh 1/1 Running 0 93s ignition-server-6b689748fc-zdqzk 1/1 Running 0 95s ignition-server-proxy-54d4bb9b9b-6zkg7 1/1 Running 0 95s ingress-operator-6548dc758b-f9gtg 1/2 Running 0 94s konnectivity-agent-7767cdc6f5-tw782 1/1 Running 0 95s kube-apiserver-7b5799b6c8-9f5bp 4/4 Running 0 3m7s kube-controller-manager-5465bc4dd6-zpdlk 1/1 Running 0 44s kube-scheduler-5dd5f78b94-bbbck 1/1 Running 0 2m36s machine-approver-846c69f56-jxvfr 1/1 Running 0 92s oauth-openshift-79c7bf44bf-j975g 2/2 Running 0 62s olm-operator-767f9584c-4lcl2 2/2 Running 0 93s openshift-apiserver-5d469778c6-pl8tj 3/3 Running 0 2m36s openshift-controller-manager-6475fdff58-hl4f7 1/1 Running 0 95s openshift-oauth-apiserver-dbbc5cc5f-98574 2/2 Running 0 95s openshift-route-controller-manager-5f6997b48f-s9vdc 1/1 Running 0 95s packageserver-67c87d4d4f-kl7qh 2/2 Running 0 93s When the hosted cluster is available, the output looks like the following example. Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters hosted-dual hosted-admin-kubeconfig Partial True False The hosted control plane is available 6.3.13.2. Creating a NodePool object for the hosted cluster A NodePool is a scalable set of worker nodes that is associated with a hosted cluster. NodePool machine architectures remain consistent within a specific pool and are independent of the machine architecture of the control plane. Procedure Create a YAML file with the following information about the NodePool object, replacing values as necessary: apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: creationTimestamp: null name: <hosted_cluster_name> \ 1 namespace: <hosted_cluster_namespace> \ 2 spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: false \ 3 upgradeType: InPlace \ 4 nodeDrainTimeout: 0s platform: type: Agent release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 \ 5 replicas: 2 6 status: replicas: 2 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 The autoRepair field is set to false because the node will not be re-created if it is removed. 4 The upgradeType is set to InPlace , which indicates that the same bare metal node is reused during an upgrade. 5 All of the nodes included in this NodePool are based on the following OpenShift Container Platform version: 4.x.y-x86_64 .
Replace the <dns.base.domain.name> value with your DNS base domain name and the 4.x.y value with the supported OpenShift Container Platform version you want to use. 6 You can set the replicas value to 2 to create two node pool replicas in your hosted cluster. Create the NodePool object by entering the following command: USD oc apply -f 02-nodepool.yaml Example output NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted-dual hosted 0 False False 4.x.y-x86_64 6.3.13.3. Creating an InfraEnv resource for the hosted cluster The InfraEnv resource is an Assisted Service object that includes essential details, such as the pullSecretRef and the sshAuthorizedKey . Those details are used to create the Red Hat Enterprise Linux CoreOS (RHCOS) boot image that is customized for the hosted cluster. You can host more than one InfraEnv resource, and each one can adopt certain types of hosts. For example, you might want to divide your server farm so that hosts with greater RAM capacity are adopted by a dedicated InfraEnv resource. Procedure Create a YAML file with the following information about the InfraEnv resource, replacing values as necessary: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2 spec: pullSecretRef: 3 name: pull-secret sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk7ICaUE+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9EKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpE1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYwEEDzmnKLMW1VtCE6nJzEgWCufACTbxpNS7GvKtoHT/OVzw8ArEXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1AmEy8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= 4 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 The pullSecretRef field refers to the secret in the same namespace as the InfraEnv resource from which the pull secret is used. 4 The sshAuthorizedKey represents the SSH public key that is placed in the boot image. The SSH key allows access to the worker nodes as the core user. Create the InfraEnv resource by entering the following command: USD oc apply -f 03-infraenv.yaml Example output NAMESPACE NAME ISO CREATED AT clusters-hosted-dual hosted 2023-09-11T15:14:10Z 6.3.13.4. Creating worker nodes for the hosted cluster If you are working on a bare metal platform, creating worker nodes is crucial to ensure that the details in the BareMetalHost objects are correctly configured. If you are working with virtual machines, you can complete the following steps to create empty worker nodes for the Metal3 Operator to consume. To do so, you use the kcli tool. Procedure If this is not your first attempt to create worker nodes, you must first delete your setup. To do so, delete the plan by entering the following command: USD kcli delete plan <hosted_cluster_name> 1 1 Replace <hosted_cluster_name> with the name of your hosted cluster. When you are prompted to confirm whether you want to delete the plan, type y . Confirm that you see a message stating that the plan was deleted.
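Optionally, you can verify that the plan is gone before you re-create the virtual machines. This is a sketch that assumes the kcli list subcommand is available in your kcli version:

USD kcli list plan

If the table that is printed no longer contains an entry for <hosted_cluster_name>, the deletion succeeded.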
Create the virtual machines by entering the following commands: Enter the following command to create the first virtual machine: USD kcli create vm \ -P start=False \ 1 -P uefi_legacy=true \ 2 -P plan=<hosted_cluster_name> \ 3 -P memory=8192 -P numcpus=16 \ 4 -P disks=[200,200] \ 5 -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:01\"}"] \ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 \ -P name=<hosted_cluster_name>-worker0 7 1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation. 2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with UEFI implementations. 3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster. 4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU. 5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM. 6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:01"}] to provide network details, including the network name to connect to, the type of network ( ipv4 , ipv6 , or dual ), and the MAC address of the primary interface. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. Enter the following command to create the second virtual machine: USD kcli create vm \ -P start=False \ 1 -P uefi_legacy=true \ 2 -P plan=<hosted_cluster_name> \ 3 -P memory=8192 -P numcpus=16 \ 4 -P disks=[200,200] \ 5 -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:02\"}"] \ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 -P name=<hosted_cluster_name>-worker1 7 1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation. 2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with UEFI implementations. 3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster. 4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU. 5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM. 6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:02"}] to provide network details, including the network name to connect to, the type of network ( ipv4 , ipv6 , or dual ), and the MAC address of the primary interface. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. Enter the following command to create the third virtual machine: USD kcli create vm \ -P start=False \ 1 -P uefi_legacy=true \ 2 -P plan=<hosted_cluster_name> \ 3 -P memory=8192 -P numcpus=16 \ 4 -P disks=[200,200] \ 5 -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:03\"}"] \ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 -P name=<hosted_cluster_name>-worker2 7 1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation. 2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with UEFI implementations. 3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU. 5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM. 6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:03"}] to provide network details, including the network name to connect to, the type of network ( ipv4 , ipv6 , or dual ), and the MAC address of the primary interface. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. Restart the ksushy tool by entering the following command to ensure that the tool detects the VMs that you added: USD systemctl restart ksushy Example output +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | Name | Status | Ip | Source | Plan | Profile | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | hosted-worker0 | down | | | hosted-dual | kvirt | | hosted-worker1 | down | | | hosted-dual | kvirt | | hosted-worker2 | down | | | hosted-dual | kvirt | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ In the output, the new VMs appear with a down status because they were created with start=False and are not started yet. 6.3.13.5. Creating bare metal hosts for the hosted cluster A bare metal host is an openshift-machine-api object that encompasses physical and logical details so that it can be identified by a Metal3 Operator. Those details are associated with other Assisted Service objects, known as agents . Prerequisites Before you create the bare metal host and destination nodes, you must have the destination machines ready. Procedure To create a bare metal host, complete the following steps: Create a YAML file with the following information: Because each worker node needs a secret that holds the bare metal host credentials, you must create at least two objects for each worker node: the secret and the BareMetalHost object. apiVersion: v1 kind: Secret metadata: name: <hosted_cluster_name>-worker0-bmc-secret \ 1 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \ 2 data: password: YWRtaW4= \ 3 username: YWRtaW4= \ 4 type: Opaque # ... apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <hosted_cluster_name>-worker0 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \ 5 labels: infraenvs.agent-install.openshift.io: <hosted_cluster_name> \ 6 annotations: inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 \ 7 spec: automatedCleaningMode: disabled \ 8 bmc: disableCertificateVerification: true \ 9 address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 \ 10 credentialsName: <hosted_cluster_name>-worker0-bmc-secret \ 11 bootMACAddress: aa:aa:aa:aa:02:11 \ 12 online: true 13 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 5 Replace <hosted_cluster_name> with the name of your hosted cluster. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 Specify the password of the baseboard management controller (BMC) in Base64 format. 4 Specify the user name of the BMC in Base64 format. 6 Replace <hosted_cluster_name> with the name of your hosted cluster. The infraenvs.agent-install.openshift.io field serves as the link between the Assisted Installer and the BareMetalHost objects. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. The bmac.agent-install.openshift.io/hostname field represents the node name that is adopted during deployment.
8 The automatedCleaningMode field prevents the node from being erased by the Metal3 Operator. 9 The disableCertificateVerification field is set to true to bypass certificate validation from the client. 10 Replace <hosted_cluster_name> with the name of your hosted cluster. The address field denotes the BMC address of the worker node. 11 Replace <hosted_cluster_name> with the name of your hosted cluster. The credentialsName field points to the secret where the user and password credentials are stored. 12 The bootMACAddress field indicates the interface MAC address that the node starts from. 13 The online field defines the state of the node after the BareMetalHost object is created. Deploy the BareMetalHost object by entering the following command: USD oc apply -f 04-bmh.yaml During the process, you can view the following outputs: This output indicates that the process is trying to reach the nodes: Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 registering true 2s clusters-hosted hosted-worker1 registering true 2s clusters-hosted hosted-worker2 registering true 2s This output indicates that the nodes are starting: Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioning true 16s clusters-hosted hosted-worker1 provisioning true 16s clusters-hosted hosted-worker2 provisioning true 16s This output indicates that the nodes started successfully: Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioned true 67s clusters-hosted hosted-worker1 provisioned true 67s clusters-hosted hosted-worker2 provisioned true 67s After the nodes start, notice the agents in the namespace, as shown in this example: Example output NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 true auto-assign The agents represent nodes that are available for installation. To assign the nodes to a hosted cluster, scale up the node pool. 6.3.13.6. Scaling up the node pool After you create the bare metal hosts, their statuses change from Registering to Provisioning to Provisioned . The nodes start with the LiveISO of the agent and a default pod that is named agent . That agent is responsible for receiving instructions from the Assisted Service Operator to install the OpenShift Container Platform payload. Procedure To scale up the node pool, enter the following command: USD oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> \ --replicas 3 where: <hosted_cluster_namespace> is the name of the hosted cluster namespace. <hosted_cluster_name> is the name of the hosted cluster.
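While the node pool scales up, you can follow the progress with a watch loop that is similar to the one used earlier in this procedure. The following is a sketch; it assumes the <hosted_cluster_namespace>-<hosted_cluster_name> agent namespace pattern that is used throughout these steps:

USD watch "oc get agent -n <hosted_cluster_namespace>-<hosted_cluster_name>; echo; oc get nodepool -n <hosted_cluster_namespace>"

Each agent reports its installation stage as it receives instructions from the Assisted Service.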
After the scaling process is complete, notice that the agents are assigned to a hosted cluster: Example output NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 hosted true auto-assign Also notice that the node pool replicas are set: Example output NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted hosted 3 False False <4.x.y>-x86_64 Minimum availability requires 3 replicas, current 0 available Replace <4.x.y> with the supported OpenShift Container Platform version that you want to use. Wait until the nodes join the cluster. During the process, the agents provide updates on their stage and status. 6.4. Deploying hosted control planes on IBM Z in a disconnected environment Hosted control plane deployments in disconnected environments function differently than standalone OpenShift Container Platform deployments. A hosted control planes deployment involves two distinct environments: Control plane: Located in the management cluster, where the hosted control planes pods are run and managed by the Control Plane Operator. Data plane: Located in the workers of the hosted cluster, where the workload and a few other pods run, managed by the Hosted Cluster Config Operator. The ImageContentSourcePolicy (ICSP) custom resource for the data plane is managed through the ImageContentSources API in the hosted cluster manifest. For the control plane, ICSP objects are managed in the management cluster. These objects are parsed by the HyperShift Operator and are shared as registry-overrides entries with the Control Plane Operator. These entries are injected into any one of the available deployments in the hosted control planes namespace as an argument. To work with disconnected registries in hosted control planes, you must first create the appropriate ICSP in the management cluster. Then, to deploy disconnected workloads in the data plane, you need to add the entries that you want into the ImageContentSources field in the hosted cluster manifest. 6.4.1. Prerequisites to deploy hosted control planes on IBM Z in a disconnected environment A mirror registry. For more information, see "Creating a mirror registry with mirror registry for Red Hat OpenShift". A mirrored image for a disconnected installation. For more information, see "Mirroring images for a disconnected installation using the oc-mirror plugin". Additional resources Creating a mirror registry with mirror registry for Red Hat OpenShift Mirroring images for a disconnected installation using the oc-mirror plugin 6.4.2. Adding credentials and the registry certificate authority to the management cluster To pull the mirror registry images from the management cluster, you must first add credentials and the certificate authority of the mirror registry to the management cluster.
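Before you add them, you can optionally confirm that the mirror registry is reachable and that your credentials work. The following sketch assumes that the podman CLI is installed on the host where you run the oc commands and that the registry presents a self-signed certificate:

USD podman login <mirror_registry> --username <user> --password <password> --tls-verify=false

A Login Succeeded! message indicates that the registry accepts the credentials that you are about to add.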
Procedure Create a ConfigMap with the certificate of the mirror registry by running the following command: USD oc apply -f registry-config.yaml Example registry-config.yaml file apiVersion: v1 kind: ConfigMap metadata: name: registry-config namespace: openshift-config data: <mirror_registry>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- Patch the image.config.openshift.io cluster-wide object to include the following entries: spec: additionalTrustedCA: - name: registry-config Update the management cluster pull secret to add the credentials of the mirror registry. Fetch the pull secret from the cluster in a JSON format by running the following command: USD oc get secret/pull-secret -n openshift-config -o json \ | jq -r '.data.".dockerconfigjson"' \ | base64 -d > authfile Edit the fetched secret JSON file to include a section with the credentials of the mirror registry: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Provide the name of the mirror registry. 2 Provide the credentials for the mirror registry to allow images to be fetched. Update the pull secret on the cluster by running the following command: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=authfile 6.4.3. Updating the registry certificate authority in the AgentServiceConfig resource with the mirror registry When you use a mirror registry for images, agents need to trust the registry's certificate to securely pull images. You can add the certificate authority of the mirror registry to the AgentServiceConfig custom resource by creating a ConfigMap . Prerequisites You must have installed the multicluster engine for Kubernetes Operator. Procedure In the same namespace where you installed the multicluster engine Operator, create a ConfigMap resource with the mirror registry details. This ConfigMap resource grants the hosted cluster workers the capability to retrieve images from the mirror registry. Example ConfigMap file apiVersion: v1 kind: ConfigMap metadata: name: mirror-config namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | [[registry]] location = "registry.stage.redhat.io" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "<mirror_registry>" insecure = false [[registry]] location = "registry.redhat.io/multicluster-engine" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "<mirror_registry>/multicluster-engine" 1 insecure = false 1 Replace <mirror_registry> with the name of your mirror registry. Patch the AgentServiceConfig resource to include the ConfigMap resource that you created. If the AgentServiceConfig resource is not present, create the AgentServiceConfig resource with the following content embedded into it: spec: mirrorRegistryRef: name: mirror-config 6.4.4. Adding the registry certificate authority to the hosted cluster When you are deploying hosted control planes on IBM Z in a disconnected environment, include the additional-trust-bundle and image-content-sources resources. Those resources allow the hosted cluster to inject the certificate authority into the data plane workers so that the images are pulled from the registry. Create the icsp.yaml file with the image-content-sources information.
The image-content-sources information is available in the ImageContentSourcePolicy YAML file that is generated after you mirror the images by using oc-mirror . Example ImageContentSourcePolicy file # cat icsp.yaml - mirrors: - <mirror_registry>/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror_registry>/openshift/release-images source: quay.io/openshift-release-dev/ocp-release Create a hosted cluster and provide the additional-trust-bundle certificate to update the compute nodes with the certificates as in the following example: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ --etcd-storage-class=<etcd_storage_class> \ 5 --ssh-key <path_to_ssh_public_key> \ 6 --namespace <hosted_cluster_namespace> \ 7 --control-plane-availability-policy SingleReplica \ --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 8 --additional-trust-bundle <path for cert> \ 9 --image-content-sources icsp.yaml 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <path_to_pull_secret> with the path to your pull secret, for example, /user/name/pullsecret . 3 Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted . 4 Replace <basedomain> with your base domain, for example, example.com . 5 Replace <etcd_storage_class> with the etcd storage class name, for example, lvm-storageclass . 6 Replace <path_to_ssh_public_key> with the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 7 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 8 Replace <ocp_release_image> with the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi . 9 Replace <path for cert> with the path to the certificate authority of the mirror registry. 6.5. Monitoring user workload in a disconnected environment The hypershift-addon managed cluster add-on enables the --enable-uwm-telemetry-remote-write option in the HyperShift Operator. By enabling that option, you ensure that user workload monitoring is enabled and that it can remotely write telemetry metrics from control planes. 6.5.1. Resolving user workload monitoring issues If you installed the multicluster engine Operator on OpenShift Container Platform clusters that are not connected to the internet, the user workload monitoring feature of the HyperShift Operator fails. You can see the error by entering the following command: USD oc get events -n hypershift Example error LAST SEEN TYPE REASON OBJECT MESSAGE 4m46s Warning ReconcileError deployment/operator Failed to ensure UWM telemetry remote write: cannot get telemeter client secret: Secret "telemeter-client" not found To resolve the error, you must disable the user workload monitoring option by creating a config map in the local-cluster namespace. You can create the config map either before or after you enable the add-on. The add-on agent reconfigures the HyperShift Operator. Procedure Create the following config map: kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: "" installFlagsToRemove: "--enable-uwm-telemetry-remote-write" Apply the config map by running the following command: USD oc apply -f <filename>.yaml 6.5.2. Verifying the status of the hosted control plane feature The hosted control plane feature is enabled by default.
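You can check the current setting before you change anything. The following sketch queries the MultiClusterEngine overrides with a JSONPath filter; replace <multiclusterengine> with the name of your multicluster engine Operator instance:

USD oc get mce <multiclusterengine> -o jsonpath='{.spec.overrides.components[?(@.name=="hypershift")].enabled}'

If the command prints false, the feature is disabled, and you can enable it by using the following procedure.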
Procedure If the feature is disabled and you want to enable it, enter the following command. Replace <multiclusterengine> with the name of your multicluster engine Operator instance: USD oc patch mce <multiclusterengine> --type=merge -p \ '{"spec":{"overrides":{"components":[{"name":"hypershift","enabled": true}]}}}' When you enable the feature, the hypershift-addon managed cluster add-on is installed in the local-cluster managed cluster, and the add-on agent installs the HyperShift Operator on the multicluster engine Operator hub cluster. Confirm that the hypershift-addon managed cluster add-on is installed by entering the following command: USD oc get managedclusteraddons -n local-cluster hypershift-addon Example output To avoid a timeout during this process, enter the following commands: USD oc wait --for=condition=Degraded=True managedclusteraddons/hypershift-addon \ -n local-cluster --timeout=5m USD oc wait --for=condition=Available=True managedclusteraddons/hypershift-addon \ -n local-cluster --timeout=5m When the process is complete, the hypershift-addon managed cluster add-on and the HyperShift Operator are installed, and the local-cluster managed cluster is available to host and manage hosted clusters. 6.5.3. Configuring the hypershift-addon managed cluster add-on to run on an infrastructure node By default, no node placement preference is specified for the hypershift-addon managed cluster add-on. Consider running the add-on on infrastructure nodes because, by doing so, you prevent incurring billing costs against subscription counts and you separate maintenance and management tasks. Procedure Log in to the hub cluster. Open the hypershift-addon-deploy-config add-on deployment configuration specification for editing by entering the following command: USD oc edit addondeploymentconfig hypershift-addon-deploy-config \ -n multicluster-engine Add the nodePlacement field to the specification, as shown in the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: hypershift-addon-deploy-config namespace: multicluster-engine spec: nodePlacement: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists Save the changes. The hypershift-addon managed cluster add-on is deployed on an infrastructure node for new and existing managed clusters.
"apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-{product-version} minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.17 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5",
"oc-mirror --v2 --config imagesetconfig.yaml --workspace file://mirror-file docker://<registry>",
"apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2",
"oc mirror -c imagesetconfig.yaml --workspace file://<file_path> docker://<mirror_registry_url> --v2",
"oc mirror -c imagesetconfig.yaml file://<file_path> --v2",
"oc mirror -c imagesetconfig.yaml --from file://<file_path> docker://<mirror_registry_url> --v2",
"oc get packagemanifest",
"oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml",
"oc apply -f catalogSource-XXXXXXXX-index.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"spec: additionalTrustedCA: - name: registry-config",
"spec: additionalTrustBundle: - name: user-ca-bundle 1",
"apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1",
"hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <node_pool_replica_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <value_for_memory> \\ 4 --cores <value_for_cpu> \\ 5 --etcd-storage-class=<etcd_storage_class> 6",
"oc -n clusters-<hosted-cluster-name> get pods",
"NAME READY STATUS RESTARTS AGE capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s . . . redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0 66s",
"oc get --namespace clusters hostedclusters",
"NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available",
"*.apps.mgmt-cluster.example.com",
"*.apps.guest.apps.mgmt-cluster.example.com",
"oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ \"op\": \"add\", \"path\": \"/spec/routeAdmission\", \"value\": {wildcardPolicy: \"WildcardsAllowed\"}}]'",
"hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <value_for_memory> \\ 4 --cores <value_for_cpu> \\ 5 --base-domain <basedomain> 6",
"oc get --namespace clusters hostedclusters",
"NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available",
"hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig",
"oc --kubeconfig <hosted_cluster_name>-kubeconfig get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get \"https://console-openshift-console.apps.example.hypershift.lab\": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The \"default\" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)",
"oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}'",
"oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}'",
"apply -f - apiVersion: v1 kind: Service metadata: labels: app: <hosted_cluster_name> name: <hosted_cluster_name>-apps namespace: clusters-<hosted_cluster_name> spec: ports: - name: https-443 port: 443 protocol: TCP targetPort: <https_node_port> 1 - name: http-80 port: 80 protocol: TCP targetPort: <http-node-port> 2 selector: kubevirt.io: virt-launcher type: LoadBalancer",
"oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'",
"192.168.20.30",
"*.apps.<hosted_cluster_name\\>.<base_domain\\>.",
"dig +short test.apps.example.hypershift.lab 192.168.20.30",
"oc get --namespace clusters hostedclusters",
"NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available",
"export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig",
"watch \"oc get pod -n hypershift;echo;echo; oc get pod -n clusters-hosted-ipv4;echo;echo; oc get bmh -A;echo;echo; oc get agent -A;echo;echo; oc get infraenv -A;echo;echo; oc get hostedcluster -A;echo;echo; oc get nodepool -A;echo;echo;\"",
"oc get secret -n clusters-hosted-ipv4 admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > /root/hc_admin_kubeconfig.yaml",
"export KUBECONFIG=/root/hc_admin_kubeconfig.yaml",
"watch \"oc get clusterversion,nodes,co\"",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- # -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4\" [[registry]] prefix = \"\" location = \"registry.redhat.io/rhacm2\" mirror-by-digest-only = true",
"oc adm release info <tagged_openshift_release_image> | grep \"Pull From\"",
"Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe",
"sudo dnf install dnsmasq radvd vim golang podman bind-utils net-tools httpd-tools tree htop strace tmux -y",
"systemctl enable --now podman",
"sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm",
"sudo usermod -aG qemu,libvirt USD(id -un)",
"sudo newgrp libvirt",
"sudo systemctl enable --now libvirtd",
"sudo dnf -y copr enable karmab/kcli",
"sudo dnf -y install kcli",
"sudo kcli create pool -p /var/lib/libvirt/images default",
"kcli create host kvm -H 127.0.0.1 local",
"sudo setfacl -m u:USD(id -un):rwx /var/lib/libvirt/images",
"kcli create network -c 192.168.122.0/24 default",
"#!/bin/bash export IP=\"192.168.126.1\" 1 export BASE_RESOLV_CONF=\"/run/NetworkManager/resolv.conf\" if ! [[ `grep -q \"USDIP\" /etc/resolv.conf` ]]; then export TMP_FILE=USD(mktemp /etc/forcedns_resolv.conf.XXXXXX) cp USDBASE_RESOLV_CONF USDTMP_FILE chmod --reference=USDBASE_RESOLV_CONF USDTMP_FILE sed -i -e \"s/dns.base.domain.name//\" -e \"s/search /& dns.base.domain.name /\" -e \"0,/nameserver/s/nameserver/& USDIP\\n&/\" USDTMP_FILE 2 mv USDTMP_FILE /etc/resolv.conf fi echo \"ok\"",
"chmod 755 /etc/NetworkManager/dispatcher.d/forcedns",
"sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y",
"kcli create sushy-service --ssl --ipv6 --port 9000",
"sudo systemctl daemon-reload",
"systemctl enable --now ksushy",
"systemctl status ksushy",
"sed -i s/^SELINUX=.*USD/SELINUX=permissive/ /etc/selinux/config; setenforce 0",
"systemctl disable --now firewalld",
"systemctl restart libvirtd",
"systemctl enable --now libvirtd",
"api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23",
"api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10",
"host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]",
"#!/usr/bin/env bash set -euo pipefail PRIMARY_NIC=USD(ls -1 /sys/class/net | grep -v podman | head -1) export PATH=/root/bin:USDPATH export PULL_SECRET=\"/root/baremetal/hub/openshift_pull.json\" 1 if [[ ! -f USDPULL_SECRET ]];then echo \"Pull Secret not found, exiting...\" exit 1 fi dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel export IP=USD(ip -o addr show USDPRIMARY_NIC | head -1 | awk '{print USD4}' | cut -d'/' -f1) REGISTRY_NAME=registry.USD(hostname --long) REGISTRY_USER=dummy REGISTRY_PASSWORD=dummy KEY=USD(echo -n USDREGISTRY_USER:USDREGISTRY_PASSWORD | base64) echo \"{\\\"auths\\\": {\\\"USDREGISTRY_NAME:5000\\\": {\\\"auth\\\": \\\"USDKEY\\\", \\\"email\\\": \\\"[email protected]\\\"}}}\" > /root/disconnected_pull.json mv USD{PULL_SECRET} /root/openshift_pull.json.old jq \".auths += {\\\"USDREGISTRY_NAME:5000\\\": {\\\"auth\\\": \\\"USDKEY\\\",\\\"email\\\": \\\"[email protected]\\\"}}\" < /root/openshift_pull.json.old > USDPULL_SECRET mkdir -p /opt/registry/{auth,certs,data,conf} cat <<EOF > /opt/registry/conf/config.yml version: 0.1 log: fields: service: registry storage: cache: blobdescriptor: inmemory filesystem: rootdirectory: /var/lib/registry delete: enabled: true http: addr: :5000 headers: X-Content-Type-Options: [nosniff] health: storagedriver: enabled: true interval: 10s threshold: 3 compatibility: schema1: enabled: true EOF openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj \"/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=USDREGISTRY_NAME\" -addext \"subjectAltName=DNS:USDREGISTRY_NAME\" cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust extract htpasswd -bBc /opt/registry/auth/htpasswd USDREGISTRY_USER USDREGISTRY_PASSWORD create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry\" -e \"REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest [ \"USD?\" == \"0\" ] || !! systemctl enable --now registry",
"chmod u+x USD{HOME}/registry.sh",
"USD{HOME}/registry.sh",
"systemctl status",
"systemctl start",
"systemctl stop",
"kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual",
"kcli list network Listing Networks +---------+--------+---------------------+-------+------------------+------+ | Network | Type | Cidr | Dhcp | Domain | Mode | +---------+--------+---------------------+-------+------------------+------+ | default | routed | 192.168.122.0/24 | True | default | nat | | ipv4 | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat | | ipv4 | routed | 192.168.125.0/24 | False | dns.base.domain.name | nat | | ipv6 | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat | +---------+--------+---------------------+-------+------------------+------+",
"kcli info network ipv6 Providing information about network ipv6 cidr: 2620:52:0:1306::/64 dhcp: false domain: dns.base.domain.name mode: nat plan: kvirt type: routed",
"plan: hub-dual force: true version: stable tag: \"<4.x.y>-x86_64\" 1 cluster: \"hub-dual\" dualstack: true domain: dns.base.domain.name api_ip: 192.168.126.10 ingress_ip: 192.168.126.11 service_networks: - 172.30.0.0/16 - fd02::/112 cluster_networks: - 10.132.0.0/14 - fd01::/48 disconnected_url: registry.dns.base.domain.name:5000 disconnected_update: true disconnected_user: dummy disconnected_password: dummy disconnected_operators_version: v4.14 disconnected_operators: - name: metallb-operator - name: lvms-operator channels: - name: stable-4.14 disconnected_extra_images: - quay.io/user-name/trbsht:latest - quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3 - registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 dualstack: true disk_size: 200 extra_disks: [200] memory: 48000 numcpus: 16 ctlplanes: 3 workers: 0 manifests: extra-manifests metal3: true network: dual users_dev: developer users_devpassword: developer users_admin: admin users_adminpassword: admin metallb_pool: dual-virtual-network metallb_ranges: - 192.168.126.150-192.168.126.190 metallb_autoassign: true apps: - users - lvms-operator - metallb-operator vmrules: - hub-bootstrap: nets: - name: ipv6 mac: aa:aa:aa:aa:10:07 - hub-ctlplane-0: nets: - name: ipv6 mac: aa:aa:aa:aa:10:01 - hub-ctlplane-1: nets: - name: ipv6 mac: aa:aa:aa:aa:10:02 - hub-ctlplane-2: nets: - name: ipv6 mac: aa:aa:aa:aa:10:03",
"kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml",
"oc adm -a USD{LOCAL_SECRET_JSON} release extract --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"#!/bin/bash WEBSRV_FOLDER=/opt/srv ROOTFS_IMG_URL=\"USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')\" 1 LIVE_ISO_URL=\"USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')\" 2 mkdir -p USD{WEBSRV_FOLDER}/images curl -Lk USD{ROOTFS_IMG_URL} -o USD{WEBSRV_FOLDER}/images/USD{ROOTFS_IMG_URL##*/} curl -Lk USD{LIVE_ISO_URL} -o USD{WEBSRV_FOLDER}/images/USD{LIVE_ISO_URL##*/} chmod -R 755 USD{WEBSRV_FOLDER}/* ## Run Webserver ps --noheading | grep -q websrv-ai if [[ USD? == 0 ]];then echo \"Launching Registry pod...\" /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080 fi",
"apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-{product-version} minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.17 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5",
"oc-mirror --v2 --config imagesetconfig.yaml --workspace file://mirror-file docker://<registry>",
"apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2",
"oc mirror -c imagesetconfig.yaml --workspace file://<file_path> docker://<mirror_registry_url> --v2",
"oc mirror -c imagesetconfig.yaml file://<file_path> --v2",
"oc mirror -c imagesetconfig.yaml --from file://<file_path> docker://<mirror_registry_url> --v2",
"oc get packagemanifest",
"oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml",
"oc apply -f catalogSource-XXXXXXXX-index.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"registry.dns.base.domain.name:5000/openshift4\" 1 [[registry]] prefix = \"\" location = \"registry.redhat.io/rhacm2\" mirror-by-digest-only = true # #",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: annotations: unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config 1 name: agent namespace: multicluster-engine spec: mirrorRegistryRef: name: custom-registries 2 databaseStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 20Gi osImages: 3 - cpuArchitecture: x86_64 4 openshiftVersion: \"4.14\" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img 5 url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso version: 414.92.202308281054-0 - cpuArchitecture: x86_64 openshiftVersion: \"4.15\" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso version: 415.92.202403270524-0",
"oc apply -f agentServiceConfig.yaml",
"assisted-image-service-0 1/1 Running 2 11d 1 assisted-service-668b49548-9m7xw 2/2 Running 5 11d 2",
"apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"spec: additionalTrustedCA: - name: registry-config",
"spec: additionalTrustBundle: - name: user-ca-bundle 1",
"apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1",
"--- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace>-<hosted_cluster_name> 1 spec: {} status: {} --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace> 2 spec: {} status: {}",
"--- apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 --- apiVersion: v1 data: .dockerconfigjson: xxxxxxxxx kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-pull-secret 2 namespace: <hosted_cluster_namespace> 3 --- apiVersion: v1 kind: Secret metadata: name: sshkey-cluster-<hosted_cluster_name> 4 namespace: <hosted_cluster_namespace> 5 stringData: id_rsa.pub: ssh-rsa xxxxxxxxx --- apiVersion: v1 data: key: nTPtVBEt03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y= kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-etcd-encryption-key 6 namespace: <hosted_cluster_namespace> 7 type: Opaque",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: capi-provider-role namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2 rules: - apiGroups: - agent-install.openshift.io resources: - agents verbs: - '*'",
"apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: additionalTrustBundle: name: \"user-ca-bundle\" olmCatalogPlacement: guest imageContentSources: 3 - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: - registry.<dns.base.domain.name>:5000/openshift/release 4 - source: quay.io/openshift-release-dev/ocp-release mirrors: - registry.<dns.base.domain.name>:5000/openshift/release-images 5 - mirrors: autoscaling: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: <dns.base.domain.name> 6 etcd: managed: storage: persistentVolume: size: 8Gi restoreSnapshotURL: null type: PersistentVolume managementType: Managed fips: false networking: clusterNetwork: - cidr: 10.132.0.0/14 - cidr: fd01::/48 networkType: OVNKubernetes serviceNetwork: - cidr: 172.31.0.0/16 - cidr: fd02::/112 platform: agent: agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name> 7 8 type: Agent pullSecret: name: <hosted_cluster_name>-pull-secret 9 release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:<4.x.y>-x86_64 10 11 secretEncryption: aescbc: activeKey: name: <hosted_cluster_name>-etcd-encryption-key 12 type: aescbc services: - service: APIServer servicePublishingStrategy: type: LoadBalancer - service: OAuthServer servicePublishingStrategy: type: Route - service: OIDC servicePublishingStrategy: type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route sshKey: name: sshkey-cluster-<hosted_cluster_name> 13 status: controlPlaneEndpoint: host: \"\" port: 0",
"oc adm release info registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:<4.x.y>-x86_64 | grep hypershift",
"hypershift sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8",
"pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8",
"pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 Getting image source signatures Copying blob d8190195889e skipped: already exists Copying blob c71d2589fba7 skipped: already exists Copying blob d4dc6e74b6ce skipped: already exists Copying blob 97da74cc6d8f skipped: already exists Copying blob b70007a560c9 done Copying config 3a62961e6e done Writing manifest to image destination Storing signatures 3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde",
"oc apply -f 01-4.14-hosted_cluster-nodeport.yaml",
"NAME READY STATUS RESTARTS AGE capi-provider-5b57dbd6d5-pxlqc 1/1 Running 0 3m57s catalog-operator-9694884dd-m7zzv 2/2 Running 0 93s cluster-api-f98b9467c-9hfrq 1/1 Running 0 3m57s cluster-autoscaler-d7f95dd5-d8m5d 1/1 Running 0 93s cluster-image-registry-operator-5ff5944b4b-648ht 1/2 Running 0 93s cluster-network-operator-77b896ddc-wpkq8 1/1 Running 0 94s cluster-node-tuning-operator-84956cd484-4hfgf 1/1 Running 0 94s cluster-policy-controller-5fd8595d97-rhbwf 1/1 Running 0 95s cluster-storage-operator-54dcf584b5-xrnts 1/1 Running 0 93s cluster-version-operator-9c554b999-l22s7 1/1 Running 0 95s control-plane-operator-6fdc9c569-t7hr4 1/1 Running 0 3m57s csi-snapshot-controller-785c6dc77c-8ljmr 1/1 Running 0 77s csi-snapshot-controller-operator-7c6674bc5b-d9dtp 1/1 Running 0 93s csi-snapshot-webhook-5b8584875f-2492j 1/1 Running 0 77s dns-operator-6874b577f-9tc6b 1/1 Running 0 94s etcd-0 3/3 Running 0 3m39s hosted-cluster-config-operator-f5cf5c464-4nmbh 1/1 Running 0 93s ignition-server-6b689748fc-zdqzk 1/1 Running 0 95s ignition-server-proxy-54d4bb9b9b-6zkg7 1/1 Running 0 95s ingress-operator-6548dc758b-f9gtg 1/2 Running 0 94s konnectivity-agent-7767cdc6f5-tw782 1/1 Running 0 95s kube-apiserver-7b5799b6c8-9f5bp 4/4 Running 0 3m7s kube-controller-manager-5465bc4dd6-zpdlk 1/1 Running 0 44s kube-scheduler-5dd5f78b94-bbbck 1/1 Running 0 2m36s machine-approver-846c69f56-jxvfr 1/1 Running 0 92s oauth-openshift-79c7bf44bf-j975g 2/2 Running 0 62s olm-operator-767f9584c-4lcl2 2/2 Running 0 93s openshift-apiserver-5d469778c6-pl8tj 3/3 Running 0 2m36s openshift-controller-manager-6475fdff58-hl4f7 1/1 Running 0 95s openshift-oauth-apiserver-dbbc5cc5f-98574 2/2 Running 0 95s openshift-route-controller-manager-5f6997b48f-s9vdc 1/1 Running 0 95s packageserver-67c87d4d4f-kl7qh 2/2 Running 0 93s",
"NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters hosted-dual hosted-admin-kubeconfig Partial True False The hosted control plane is available",
"apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: creationTimestamp: null name: <hosted_cluster_name> \\ 1 namespace: <hosted_cluster_namespace> \\ 2 spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: false \\ 3 upgradeType: InPlace \\ 4 nodeDrainTimeout: 0s platform: type: Agent release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 \\ 5 replicas: 2 6 status: replicas: 2",
"oc apply -f 02-nodepool.yaml",
"NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted-dual hosted 0 False False 4.x.y-x86_64",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> namespace: <hosted-cluster-namespace>-<hosted_cluster_name> 1 2 spec: pullSecretRef: 3 name: pull-secret sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk7ICaUE+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9EKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpE1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYwEEDzmnKLMW1VtCE6nJzEgWCufACTbxpNS7GvKtoHT/OVzw8ArEXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1AmEy8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= 4",
"oc apply -f 03-infraenv.yaml",
"NAMESPACE NAME ISO CREATED AT clusters-hosted-dual hosted 2023-09-11T15:14:10Z",
"kcli delete plan <hosted_cluster_name> 1",
"kcli create vm -P start=False \\ 1 -P uefi_legacy=true \\ 2 -P plan=<hosted_cluster_name> \\ 3 -P memory=8192 -P numcpus=16 \\ 4 -P disks=[200,200] \\ 5 -P nets=[\"{\\\"name\\\": \\\"<network>\\\", \\\"mac\\\": \\\"aa:aa:aa:aa:11:01\\\"}\"] \\ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 -P name=<hosted_cluster_name>-worker0 7",
"kcli create vm -P start=False \\ 1 -P uefi_legacy=true \\ 2 -P plan=<hosted_cluster_name> \\ 3 -P memory=8192 -P numcpus=16 \\ 4 -P disks=[200,200] \\ 5 -P nets=[\"{\\\"name\\\": \\\"<network>\\\", \\\"mac\\\": \\\"aa:aa:aa:aa:11:02\\\"}\"] \\ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 -P name=<hosted_cluster_name>-worker1 7",
"kcli create vm -P start=False \\ 1 -P uefi_legacy=true \\ 2 -P plan=<hosted_cluster_name> \\ 3 -P memory=8192 -P numcpus=16 \\ 4 -P disks=[200,200] \\ 5 -P nets=[\"{\\\"name\\\": \\\"<network>\\\", \\\"mac\\\": \\\"aa:aa:aa:aa:11:03\\\"}\"] \\ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 -P name=<hosted_cluster_name>-worker2 7",
"systemctl restart ksushy",
"+---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | Name | Status | Ip | Source | Plan | Profile | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | hosted-worker0 | down | | | hosted-dual | kvirt | | hosted-worker1 | down | | | hosted-dual | kvirt | | hosted-worker2 | down | | | hosted-dual | kvirt | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+",
"apiVersion: v1 kind: Secret metadata: name: <hosted_cluster_name>-worker0-bmc-secret \\ 1 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \\ 2 data: password: YWRtaW4= \\ 3 username: YWRtaW4= \\ 4 type: Opaque apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <hosted_cluster_name>-worker0 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \\ 5 labels: infraenvs.agent-install.openshift.io: <hosted_cluster_name> \\ 6 annotations: inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 \\ 7 spec: automatedCleaningMode: disabled \\ 8 bmc: disableCertificateVerification: true \\ 9 address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 \\ 10 credentialsName: <hosted_cluster_name>-worker0-bmc-secret \\ 11 bootMACAddress: aa:aa:aa:aa:02:11 \\ 12 online: true 13",
"oc apply -f 04-bmh.yaml",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 registering true 2s clusters-hosted hosted-worker1 registering true 2s clusters-hosted hosted-worker2 registering true 2s",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioning true 16s clusters-hosted hosted-worker1 provisioning true 16s clusters-hosted hosted-worker2 provisioning true 16s",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioned true 67s clusters-hosted hosted-worker1 provisioned true 67s clusters-hosted hosted-worker2 provisioned true 67s",
"NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 true auto-assign",
"oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> --replicas 3",
"NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 hosted true auto-assign",
"NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted hosted 3 False False <4.x.y>-x86_64 Minimum availability requires 3 replicas, current 0 available",
"oc apply -f registry-config.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: registry-config namespace: openshift-config data: <mirror_registry>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"spec: additionalTrustedCA: - name: registry-config",
"oc get secret/pull-secret -n openshift-config -o json | jq -r '.data.\".dockerconfigjson\"' | base64 -d > authfile",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=authfile",
"apiVersion: v1 kind: ConfigMap metadata: name: mirror-config namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | [[registry]] location = \"registry.stage.redhat.io\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"<mirror_registry>\" insecure = false [[registry]] location = \"registry.redhat.io/multicluster-engine\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"<mirror_registry>/multicluster-engine\" 1 insecure = false",
"spec: mirrorRegistryRef: name: mirror-config",
"cat icsp.yaml - mirrors: - <mirror_registry>/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror_registry>/openshift/release-images source: quay.io/openshift-release-dev/ocp-release",
"hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> --etcd-storage-class=<etcd_storage_class> \\ 5 --ssh-key <path_to_ssh_public_key> \\ 6 --namespace <hosted_cluster_namespace> \\ 7 --control-plane-availability-policy SingleReplica --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 8 --additional-trust-bundle <path for cert> \\ 9 --image-content-sources icsp.yaml",
"oc get events -n hypershift",
"LAST SEEN TYPE REASON OBJECT MESSAGE 4m46s Warning ReconcileError deployment/operator Failed to ensure UWM telemetry remote write: cannot get telemeter client secret: Secret \"telemeter-client\" not found",
"kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: \"\" installFlagsToRemove: \"--enable-uwm-telemetry-remote-write\"",
"oc apply -f <filename>.yaml",
"oc patch mce <multiclusterengine> --type=merge -p '{\"spec\":{\"overrides\":{\"components\":[{\"name\":\"hypershift\",\"enabled\": true}]}}}'",
"oc get managedclusteraddons -n local-cluster hypershift-addon",
"NAME AVAILABLE DEGRADED PROGRESSING hypershift-addon True False",
"oc wait --for=condition=Degraded=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m",
"oc wait --for=condition=Available=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m",
"oc edit addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: hypershift-addon-deploy-config namespace: multicluster-engine spec: nodePlacement: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hosted_control_planes/deploying-hosted-control-planes-in-a-disconnected-environment |
7.2. Representations | 7.2. Representations 7.2.1. Representations The API structures resource representations in the following XML document structure: In the context of a virtual machine, the representation appears as follows:
7.2.2. Common Attributes to Resource Representations All resource representations contain a set of common attributes.
Table 7.2. Common attributes to resource representations (Attribute / Type / Description / Properties)
id / GUID / Each resource in the virtualization infrastructure contains an id , which acts as a globally unique identifier (GUID). The GUID is the primary method of resource identification.
href / string / The canonical location of the resource as an absolute path.
7.2.3. Common Elements to Resource Representations All resource representations contain a set of common elements.
Table 7.3. Common elements to resource representations (Element / Type / Description)
name / string / A user-supplied human readable name for the resource. The name is unique across all resources of its type.
description / string / A free-form user-supplied human readable description of the resource. | [
"<resource id=\"resource_id\" href=\"/ovirt-engine/api/collection/resource_id\"> <name>Resource-Name</name> <description>A description of the resource</description> </resource>",
"<vm id=\"5b9bbce5-0d72-4f56-b931-5d449181ee06\" href=\"/ovirt-engine/api/vms/5b9bbce5-0d72-4f56-b931-5d449181ee06\"> <name>RHEL6-Machine</name> <description>Red Hat Enterprise Linux 6 Virtual Machine</description> </vm>"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-representations |
Chapter 14. Configuring virtual machine network connections | Chapter 14. Configuring virtual machine network connections For your virtual machines (VMs) to connect over a network to your host, to other VMs on your host, and to locations on an external network, the VM networking must be configured accordingly. To provide VM networking, the RHEL 8 hypervisor and newly created VMs have a default network configuration, which can also be modified further. For example: You can enable the VMs on your host to be discovered and connected to by locations outside the host, as if the VMs were on the same network as the host. You can partially or completely isolate a VM from inbound network traffic to increase its security and minimize the risk of any problems with the VM impacting the host. The following sections explain the various types of VM network configuration and provide instructions for setting up selected VM network configurations. 14.1. Understanding virtual networking The connection of virtual machines (VMs) to other devices and locations on a network has to be facilitated by the host hardware. The following sections explain the mechanisms of VM network connections and describe the default VM network setting. 14.1.1. How virtual networks work Virtual networking uses the concept of a virtual network switch. A virtual network switch is a software construct that operates on a host machine. VMs connect to the network through the virtual network switch. Based on the configuration of the virtual switch, a VM can use an existing virtual network managed by the hypervisor, or a different network connection method. The following figure shows a virtual network switch connecting two VMs to the network: From the perspective of a guest operating system, a virtual network connection is the same as a physical network connection. Host machines view virtual network switches as network interfaces. When the libvirtd service is first installed and started, it creates virbr0 , the default network interface for VMs. To view information about this interface, use the ip utility on the host. By default, all VMs on a single host are connected to the same NAT-type virtual network, named default , which uses the virbr0 interface. For details, see Virtual networking default configuration . For basic outbound-only network access from VMs, no additional network setup is usually needed, because the default network is installed along with the libvirt-daemon-config-network package, and is automatically started when the libvirtd service is started. If a different VM network functionality is needed, you can create additional virtual networks and network interfaces and configure your VMs to use them. In addition to the default NAT, these networks and interfaces can be configured to use one of the following modes: Routed mode Bridged mode Isolated mode Open mode 14.1.2. Virtual networking default configuration When the libvirtd service is first installed on a virtualization host, it contains an initial virtual network configuration in network address translation (NAT) mode. By default, all VMs on the host are connected to the same libvirt virtual network, named default . 
VMs on this network can connect to locations both on the host and on the network beyond the host, but with the following limitations: VMs on the network are visible to the host and other VMs on the host, but the network traffic is affected by the firewalls in the guest operating system's network stack and by the libvirt network filtering rules attached to the guest interface. VMs on the network can connect to locations outside the host but are not visible to them. Outbound traffic is affected by the NAT rules, as well as the host system's firewall. The following diagram illustrates the default VM network configuration:
14.2. Using the web console for managing virtual machine network interfaces Using the RHEL 8 web console, you can manage the virtual network interfaces for the virtual machines to which the web console is connected. You can: View information about network interfaces and edit them . Add network interfaces to virtual machines , and disconnect or delete the interfaces .
14.2.1. Viewing and editing virtual network interface information in the web console By using the RHEL 8 web console, you can view and modify the virtual network interfaces on a selected virtual machine (VM): Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Scroll to Network Interfaces . The Network Interfaces section displays information about the virtual network interface configured for the VM as well as options to Add , Delete , Edit , or Unplug network interfaces. The information includes the following: Type - The type of network interface for the VM. The types include virtual network, bridge to LAN, and direct attachment. Note Generic Ethernet connection is not supported in RHEL 8 and later. Model type - The model of the virtual network interface. MAC Address - The MAC address of the virtual network interface. IP Address - The IP address of the virtual network interface. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual network interface. To edit the virtual network interface settings, click Edit . The Virtual Network Interface Settings dialog opens. Change the interface type, source, model, or MAC address. Click Save . The network interface is modified. Note Changes to the virtual network interface settings take effect only after restarting the VM. Additionally, the MAC address can only be modified when the VM is shut off. Additional resources Viewing virtual machine information by using the web console
14.2.2. Adding and connecting virtual network interfaces in the web console By using the RHEL 8 web console, you can create a virtual network interface and connect a virtual machine (VM) to it. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system .
Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Scroll to Network Interfaces . The Network Interfaces section displays information about the virtual network interface configured for the VM as well as options to Add , Edit , or Plug network interfaces. Click Plug in the row of the virtual network interface you want to connect. The selected virtual network interface connects to the VM.
14.2.3. Disconnecting and removing virtual network interfaces in the web console By using the RHEL 8 web console, you can disconnect the virtual network interfaces connected to a selected virtual machine (VM). Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Scroll to Network Interfaces . The Network Interfaces section displays information about the virtual network interface configured for the VM as well as options to Add , Delete , Edit , or Unplug network interfaces. Click Unplug in the row of the virtual network interface you want to disconnect. The selected virtual network interface disconnects from the VM.
14.3. Recommended virtual machine networking configurations In many scenarios, the default VM networking configuration is sufficient. However, if adjusting the configuration is required, you can use the command line (CLI) or the RHEL 8 web console to do so. The following sections describe selected VM network setups for such situations.
14.3.1. Configuring externally visible virtual machines by using the command line By default, a newly created VM connects to a NAT-type network that uses virbr0 , the default virtual bridge on the host. This ensures that the VM can use the host's network interface controller (NIC) for connecting to outside networks, but the VM is not reachable from external systems. If you require a VM to appear on the same external network as the hypervisor, you must use bridged mode instead. To do so, attach the VM to a bridge device connected to the hypervisor's physical network device. To use the command line for this, follow the instructions below. Prerequisites A shut-down existing VM with the default NAT setup. The IP configuration of the hypervisor. This varies depending on the network connection of the host. As an example, this procedure uses a scenario where the host is connected to the network by using an ethernet cable, and the host's physical NIC MAC address is assigned to a static IP on a DHCP server. Therefore, the ethernet interface is treated as the hypervisor IP. To obtain the IP configuration of the ethernet interface, use the ip addr utility: Procedure Create and set up a bridge connection for the physical interface on the host. For instructions, see Configuring a network bridge .
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of the physical ethernet interface to the bridge interface. Modify the VM's network to use the created bridged interface. For example, the following sets testguest to use bridge0 . Start the VM. In the guest operating system, adjust the IP and DHCP settings of the system's network interface as if the VM was another physical system in the same network as the hypervisor. The specific steps for this will differ depending on the guest OS used by the VM. For example, if the guest OS is RHEL 8, see Configuring an Ethernet connection . Verification Ensure the newly created bridge is running and contains both the host's physical interface and the interface of the VM. Ensure the VM appears on the same external network as the hypervisor: In the guest operating system, obtain the network ID of the system. For example, if it is a Linux guest: From an external system connected to the local network, connect to the VM by using the obtained ID. If the connection works, the network has been configured successfully. Troubleshooting In certain situations, such as when using a client-to-site VPN while the VM is hosted on the client, using bridged mode for making your VMs available to external locations is not possible. To work around this problem, you can set destination NAT by using nftables for the VM. Additional resources Configuring externally visible virtual machines by using the web console Virtual networking in bridged mode
14.3.2. Configuring externally visible virtual machines by using the web console By default, a newly created VM connects to a NAT-type network that uses virbr0 , the default virtual bridge on the host. This ensures that the VM can use the host's network interface controller (NIC) for connecting to outside networks, but the VM is not reachable from external systems. If you require a VM to appear on the same external network as the hypervisor, you must use bridged mode instead. To do so, attach the VM to a bridge device connected to the hypervisor's physical network device. To use the RHEL 8 web console for this, follow the instructions below. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . A shut-down existing VM with the default NAT setup. The IP configuration of the hypervisor. This varies depending on the network connection of the host. As an example, this procedure uses a scenario where the host is connected to the network by using an ethernet cable, and the host's physical NIC MAC address is assigned to a static IP on a DHCP server. Therefore, the ethernet interface is treated as the hypervisor IP. To obtain the IP configuration of the ethernet interface, go to the Networking tab in the web console, and see the Interfaces section. Procedure Create and set up a bridge connection for the physical interface on the host. For instructions, see Configuring network bridges in the web console . Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of the physical ethernet interface to the bridge interface. Modify the VM's network to use the bridged interface.
In the Network Interfaces tab of the VM: Click Add Network Interface In the Add Virtual Network Interface dialog, set: Interface Type to Bridge to LAN Source to the newly created bridge, for example bridge0 Click Add Optional: Click Unplug for all the other interfaces connected to the VM. Click Run to start the VM. In the guest operating system, adjust the IP and DHCP settings of the system's network interface as if the VM was another physical system in the same network as the hypervisor. The specific steps for this will differ depending on the guest OS used by the VM. For example, if the guest OS is RHEL 8, see Configuring an Ethernet connection . Verification In the Networking tab of the host's web console, click the row with the newly created bridge to ensure it is running and contains both the host's physical interface and the interface of the VM. Ensure the VM appears on the same external network as the hypervisor. In the guest operating system, obtain the network ID of the system. For example, if it is a Linux guest: From an external system connected to the local network, connect to the VM by using the obtained ID. If the connection works, the network has been configured successfully. Troubleshooting In certain situations, such as when using a client-to-site VPN while the VM is hosted on the client, using bridged mode for making your VMs available to external locations is not possible. To work around this problem, you can set destination NAT by using nftables for the VM. Additional resources Configuring externally visible virtual machines by using the command line Virtual networking in bridged mode 14.3.3. Replacing macvtap connections macvtap is a Linux networking device driver that creates a virtual network interface, through which virtual machines have direct access to the physical network interface on the host machine. Using macvtap connections is supported in RHEL 8. However, in comparison to other available virtual machine (VM) networking configurations, macvtap has suboptimal performance and is more difficult to set up correctly. Therefore, if your use case does not explicitly require macvtap, use a different supported networking configuration. If you are using a macvtap mode in your VM, consider instead using the following network configurations: Instead of macvtap bridge mode, use the Linux bridge configuration. Instead of macvtap passthrough mode, use PCI Passthrough . Additional resources Upstream documentation for macvtap 14.4. Types of virtual machine network connections To modify the networking properties and behavior of your VMs, change the type of virtual network or interface the VMs use. The following sections describe the connection types available to VMs in RHEL 8. 14.4.1. Virtual networking with network address translation By default, virtual network switches operate in network address translation (NAT) mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected VMs to use the host machine's IP address for communication with any external network. When the virtual network switch is operating in NAT mode, computers external to the host cannot communicate with the VMs inside the host. Warning Virtual network switches use NAT configured by firewall rules. Editing these rules while the switch is running is not recommended, because incorrect rules may result in the switch being unable to communicate. 14.4.2. 
Virtual networking in routed mode When using Routed mode, the virtual switch connects to the physical LAN connected to the host machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, the virtual machines (VMs) are all in a single subnet, separate from the host machine. The VM subnet is routed through a virtual switch, which exists on the host machine. This enables incoming connections, but requires extra routing-table entries for systems on the external network. Routed mode uses routing based on the IP address: A common topology that uses routed mode is virtual server hosting (VSH). A VSH provider may have several host machines, each with two physical network connections. One interface is used for management and accounting, the other for the VMs to connect through. Each VM has its own public IP address, but the host machines use private IP addresses so that only internal administrators can manage the VMs.
14.4.3. Virtual networking in bridged mode In most VM networking modes, VMs automatically create and connect to the virbr0 virtual bridge. In contrast, in bridged mode, the VM connects to an existing Linux bridge on the host. As a result, the VM is directly visible on the physical network. This enables incoming connections, but does not require any extra routing-table entries. Bridged mode uses connection switching based on the MAC address: In bridged mode, the VM appears within the same subnet as the host machine. All other physical machines on the same physical network can detect the VM and access it. Bridged network bonding It is possible to use multiple physical bridge interfaces on the hypervisor by joining them together with a bond. The bond can then be added to a bridge, after which the VMs can be added to the bridge as well. However, the bonding driver has several modes of operation, and not all of these modes work with a bridge where VMs are in use. The following bonding modes are usable: mode 1 mode 2 mode 4 In contrast, modes 0, 3, 5, and 6 are likely to cause the connection to fail. Also note that media-independent interface (MII) monitoring should be used to monitor bonding modes, as Address Resolution Protocol (ARP) monitoring does not work correctly. For more information about bonding modes, see the Red Hat Knowledgebase solution Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Common scenarios The most common use cases for bridged mode include: Deploying VMs in an existing network alongside host machines, making the difference between virtual and physical machines invisible to the end user. Deploying VMs without making any changes to existing physical network configuration settings. Deploying VMs that must be easily accessible to an existing physical network. Placing VMs on a physical network where they must access DHCP services. Connecting VMs to an existing network where virtual LANs (VLANs) are used. A demilitarized zone (DMZ) network. For a DMZ deployment with VMs, Red Hat recommends setting up the DMZ at the physical network router and switches, and connecting the VMs to the physical network by using bridged mode. Additional resources Configuring externally visible virtual machines by using the command line Configuring externally visible virtual machines by using the web console Explanation of bridge_opts parameters
14.4.4.
Virtual networking in isolated mode By using isolated mode, virtual machines connected to the virtual switch can communicate with each other and with the host machine, but their traffic will not pass outside of the host machine, and they cannot receive traffic from outside the host machine. Using dnsmasq in this mode is required for basic functionality such as DHCP.
14.4.5. Virtual networking in open mode When using open mode for networking, libvirt does not generate any firewall rules for the network. As a result, libvirt does not overwrite firewall rules provided by the host, and the user can therefore manually manage the VM's firewall rules.
14.4.6. Comparison of virtual machine connection types The following table provides information about the locations to which selected types of virtual machine (VM) network configurations can connect, and to which they are visible.
Table 14.1. Virtual machine connection types (Connection to the host / Connection to other VMs on the host / Connection to outside locations / Visible to outside locations)
Bridged mode: YES / YES / YES / YES
NAT: YES / YES / YES / no
Routed mode: YES / YES / YES / YES
Isolated mode: YES / YES / no / no
Open mode: depends on the host's firewall rules
14.5. Booting virtual machines from a PXE server Virtual machines (VMs) that use Preboot Execution Environment (PXE) can boot and load their configuration from a network. This chapter describes how to use libvirt to boot VMs from a PXE server on a virtual or bridged network. Warning These procedures are provided only as an example. Ensure that you have sufficient backups before proceeding.
14.5.1. Setting up a PXE boot server on a virtual network This procedure describes how to configure a libvirt virtual network to provide Preboot Execution Environment (PXE). This enables virtual machines on your host to be configured to boot from a boot image available on the virtual network. Prerequisites A local PXE server (DHCP and TFTP), such as: libvirt internal server manually configured dhcpd and tftpd dnsmasq Cobbler server PXE boot images, such as PXELINUX configured by Cobbler or manually. Procedure Place the PXE boot images and configuration in the /var/lib/tftpboot folder. Set folder permissions: Set folder ownership: Update SELinux context: Shut down the virtual network: Open the virtual network configuration file in your default editor: Edit the <ip> element to include the appropriate address, network mask, DHCP address range, and boot file, where example-pxelinux is the name of the boot image file. <ip address='192.0.2.1' netmask='255.255.255.0'> <tftp root='/var/lib/tftpboot'/> <dhcp> <range start='192.0.2.2' end='192.0.2.254' /> <bootp file=' example-pxelinux '/> </dhcp> </ip> Start the virtual network: Verification Verify that the default virtual network is active: Additional resources Preparing to install from the network by using PXE
14.5.2. Booting virtual machines by using PXE and a virtual network To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a virtual network, you must enable PXE booting. Prerequisites A PXE boot server is set up on the virtual network as described in Setting up a PXE boot server on a virtual network . Procedure Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the default virtual network into a new 10 GB qcow2 image file: Alternatively, you can manually edit the XML configuration file of an existing VM.
To do so, ensure the guest network is configured to use your virtual network and that the network is configured to be the primary boot device: Verification Start the VM by using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.
14.5.3. Booting virtual machines by using PXE and a bridged network To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a bridged network, you must enable PXE booting. Prerequisites Network bridging is enabled. A PXE boot server is available on the bridged network. Procedure Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the breth0 bridged network into a new 10 GB qcow2 image file: Alternatively, you can manually edit the XML configuration file of an existing VM. To do so, ensure that the VM is configured with a bridged network and that the network is configured to be the primary boot device: Verification Start the VM by using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.
14.6. Additional resources Configuring and managing networking Attach specific network interface cards as SR-IOV devices to increase VM performance. | [
"ip addr show virbr0 3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global virbr0",
"ip addr [...] enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s25",
"virt-xml testguest --edit --network bridge=bridge0 Domain 'testguest' defined successfully.",
"virsh start testguest",
"ip link show master bridge0 2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff 10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether fe:54:00:89:15:40 brd ff:ff:ff:ff:ff:ff",
"ip addr [...] enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s0",
"ssh [email protected] [email protected]'s password: Last login: Mon Sep 24 12:05:36 2019 root~#*",
"ip addr [...] enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s0",
"ssh [email protected] [email protected]'s password: Last login: Mon Sep 24 12:05:36 2019 root~#*",
"chmod -R a+r /var/lib/tftpboot",
"chown -R nobody: /var/lib/tftpboot",
"chcon -R --reference /usr/sbin/dnsmasq /var/lib/tftpboot chcon -R --reference /usr/libexec/libvirt_leaseshelper /var/lib/tftpboot",
"virsh net-destroy default",
"virsh net-edit default",
"<ip address='192.0.2.1' netmask='255.255.255.0'> <tftp root='/var/lib/tftpboot'/> <dhcp> <range start='192.0.2.2' end='192.0.2.254' /> <bootp file=' example-pxelinux '/> </dhcp> </ip>",
"virsh net-start default",
"virsh net-list Name State Autostart Persistent --------------------------------------------------- default active no no",
"virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10",
"<interface type='network' > <mac address='52:54:00:66:79:14'/> <source network='default'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>",
"virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10",
"<interface type='bridge' > <mac address='52:54:00:5a:ad:cb'/> <source bridge='breth0'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/configuring-virtual-machine-network-connections_configuring-and-managing-virtualization |
Chapter 10. Package and Driver Changes | Chapter 10. Package and Driver Changes The list of included packages and system drivers undergoes regular changes in Red Hat Enterprise Linux releases. This is done for a number of reasons: packages and drivers are added or updated in the operating system to provide new functionality; packages and drivers that represent out-of-date hardware are removed; the upstream project for a package or driver might no longer be maintained; or hardware-specific packages and drivers are no longer supported by a hardware vendor and are removed. This chapter lists the new and updated packages and drivers in Red Hat Enterprise Linux 6, as well as those that have been deprecated and discontinued (removed).
10.1. System Configuration Tools Changes
10.1.1. system-config-bind The system-config-bind tool has been deprecated and removed without replacement. Editing the name server configuration manually using the named.conf file is recommended in Red Hat Enterprise Linux 6. Comprehensive BIND documentation is installed as part of the bind package in /usr/share/doc/bind-x.y.z . Also, sample configurations can be found in the /usr/share/doc/bind-x.y.z/sample directory. The system-config-bind tool from previous versions does, however, generate standard BIND configuration, so depending on your environment it is possible to migrate to the version of BIND found in Red Hat Enterprise Linux 6 by moving old configuration files to the correct location and performing sufficient testing.
10.1.2. system-config-boot The system-config-boot tool allowed graphical configuration of the GRUB bootloader. In Red Hat Enterprise Linux 6 it has been deprecated and removed without replacement. The default GRUB configuration is sufficient for many users; however, if manual changes are required, the boot configuration can be accessed and changed in the grub.conf file, located in the /boot/grub directory. Red Hat Enterprise Linux 6 uses version 1 of GRUB, also known as GRUB legacy. Full documentation for configuring GRUB can be found at the GRUB homepage: http://www.gnu.org/software/grub/ .
10.1.3. system-config-cluster The system-config-cluster tool has been deprecated and removed without replacement. Using ricci and luci (from the Conga project) is recommended.
10.1.4. system-config-display The system-config-display tool has been replaced by XRandr configuration tools as found in both supported desktops: GNOME and KDE. There is no explicit configuration file ( xorg.conf ) in the default X server installation as display management is now done dynamically using one of the following menu options: GNOME: System Preferences Display (or the system-config-display command). KDE: System Settings Computer Administration Display The command line utility ( xrandr ) can also be used for display configuration. See the xrandr --help command or the manual page using the man xrandr command for further details.
10.1.5. system-config-httpd The system-config-httpd tool has been deprecated and removed without replacement. Users must configure web servers manually. Configuration can be done in the /etc/httpd directory. The main configuration file is located at /etc/httpd/conf/httpd.conf . This file is well documented with detailed comments in the file for most server configurations; however, if required, the complete Apache web server documentation is shipped in the httpd-manual package.
10.1.6. system-config-lvm The system-config-lvm tool has been deprecated.
Management of logical volumes can be performed using the gnome-disk-util or the lvm tools.
10.1.7. system-config-netboot The system-config-netboot tool has been deprecated and removed without replacement. Using Red Hat Network Satellite is recommended.
10.1.8. system-config-nfs The system-config-nfs tool has been deprecated and removed without replacement. Users must set up NFS server configuration manually.
10.1.9. system-config-rootpassword The system-config-rootpassword tool has been replaced by the system-config-users tool - a powerful user management and configuration tool. The root password can be set in the system-config-users tool by unchecking the "Hide system users and groups" option in the Preferences dialog. The root user will now be shown in the main listing, and the password can be modified like that of any other user.
10.1.10. system-config-samba The system-config-samba tool has been deprecated and removed without replacement. Users must set up SMB server configuration manually.
10.1.11. system-config-securitylevel The system-config-securitylevel tool has been removed. The system-config-firewall tool is recommended for firewall configuration.
10.1.12. system-config-soundcard The system-config-soundcard tool has been removed. Sound card detection and configuration is done automatically.
10.1.13. system-config-switchmail The system-config-switchmail tool has been deprecated and removed without replacement. Postfix is the preferred and default MTA (Mail Transfer Agent) in Red Hat Enterprise Linux 6. If you are using another MTA, it must be configured manually according to its specific configuration files and techniques.
10.1.14. Preupgrade Assistant The Preupgrade Assistant ( preupg ) checks for potential problems you might encounter with an upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 before making any changes to your system. This helps you assess your chances of successfully upgrading to Red Hat Enterprise Linux 7 before the actual upgrade process begins. The Preupgrade Assistant assesses the system for possible in-place upgrade limitations, such as package removals, incompatible obsoletes, name changes, deficiencies in some configuration file compatibilities, and so on. It then provides the following: System analysis report with proposed solutions for any detected migration issues. Data that could be used for "cloning" the system, if the in-place upgrade is not suitable. Post-upgrade scripts to resolve more complex issues after the in-place upgrade. Your system remains unchanged except for the information and logs stored by the Preupgrade Assistant . For detailed instructions on how to obtain and use the Preupgrade Assistant , see https://access.redhat.com/site/node/637583/ .
10.1.15. Red Hat Upgrade Tool The new Red Hat Upgrade Tool is used after the Preupgrade Assistant , and handles the three phases of the upgrade process: Red Hat Upgrade Tool fetches packages and an upgrade image from a disk or server, prepares the system for the upgrade, and reboots the system. The rebooted system detects that upgrade packages are available and uses systemd and yum to upgrade packages on the system. Red Hat Upgrade Tool cleans up after the upgrade and reboots the system into the upgraded operating system. Both network- and disk-based upgrades are supported. For detailed instructions on how to upgrade your Red Hat Enterprise Linux 6 system to Red Hat Enterprise Linux 7, see https://access.redhat.com/site/node/637583/ .
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/chap-migration_guide-package_changes |
Preface | Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_quarkus_reference/pr01
Chapter 1. Overview | Chapter 1. Overview 1.1. Major changes in RHEL 8.10 Installer and image creation Key highlights for RHEL image builder: You can create different partitioning modes, such as auto-lvm , lvm , and raw . You can customize tailoring options for a profile and add it to your blueprint customizations by using selected and unselected options, to add and remove rules. For more information, see New features - Installer and image creation . Security SCAP Security Guide 0.1.72 contains updated CIS profiles, a profile aligned with the PCI DSS policy version 4.0, and profiles for the latest DISA STIG policies. The Linux kernel cryptographic API ( libkcapi ) 1.4.0 introduces new tools and options. Notably, with the new -T option, you can specify target file names in hash-sum calculations. The stunnel TLS/SSL tunneling service 5.71 changes the behavior of OpenSSL 1.1 and later versions in FIPS mode. Besides this change, version 5.71 provides many new features such as support for modern PostgreSQL clients. The OpenSSL TLS toolkit now contains API-level protections against Bleichenbacher-like attacks on the RSA PKCS #1 v1.5 decryption process. See New features - Security for more information. Dynamic programming languages, web and database servers Later versions of the following Application Streams are now available: Python 3.12 Ruby 3.3 PHP 8.2 nginx 1.24 MariaDB 10.11 PostgreSQL 16 The following components have been upgraded: Git to version 2.43.0 Git LFS to version 3.4.1 See New features - Dynamic programming languages, web and database servers for more information. Identity Management Identity Management (IdM) in RHEL 8.10 introduces delegating user authentication to external identity providers (IdPs) that support the OAuth 2 Device Authorization Grant flow. This is now a fully supported feature. After performing authentication and authorization at the external IdP, the IdM user receives a Kerberos ticket with single sign-on capabilities. For more information, see New Features - Identity Management Containers Notable changes include: The podman farm build command for creating multi-architecture container images is available as a Technology Preview. Podman now supports containers.conf modules to load a predetermined set of configurations. The Container Tools packages have been updated. Podman v4.9 RESTful API now displays data of progress when you pull or push an image to the registry. SQLite is now fully supported as a default database backend for Podman. Containerfile now supports multi-line HereDoc instructions. pasta as a network name has been deprecated. The BoltDB database backend has been deprecated. The container-tools:4.0 module has been deprecated. The Container Network Interface (CNI) network stack is deprecated and will be removed in a future release. See New features - Containers for more information. 1.2. In-place upgrade and OS conversion In-place upgrade from RHEL 7 to RHEL 8 The possible in-place upgrade paths currently are: From RHEL 7.9 to RHEL 8.8 and RHEL 8.10 on the 64-bit Intel, IBM POWER 8 (little endian), and IBM Z architectures From RHEL 7.9 to RHEL 8.8 and RHEL 8.10 on systems with SAP HANA on the 64-bit Intel architecture. For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 7 to RHEL 8 . For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 7 to RHEL 8 . 
For information regarding how Red Hat supports the in-place upgrade process, see the In-place upgrade Support Policy . Notable enhancements include: New logic has been implemented to determine the expected states of the systemd services after the upgrade. Locally stored DNF repositories can now be used for the in-place upgrade. You can now configure DNF to be able to upgrade by using proxy. Issues with performing the in-place upgrade with custom DNF repositories accessed by using HTTPS have been fixed. If the /etc/pki/tls/openssl.cnf configuration file has been modified, the file is now replaced with the target default OpenSSL configuration file during the upgrade to prevent issues after the upgrade. See the pre-upgrade report for more information. In-place upgrade from RHEL 6 to RHEL 8 It is not possible to perform an in-place upgrade directly from RHEL 6 to RHEL 8. However, you can perform an in-place upgrade from RHEL 6 to RHEL 7 and then perform a second in-place upgrade to RHEL 8. For more information, see Upgrading from RHEL 6 to RHEL 7 . In-place upgrade from RHEL 8 to RHEL 9 Instructions on how to perform an in-place upgrade from RHEL 8 to RHEL 9 using the Leapp utility are provided by the document Upgrading from RHEL 8 to RHEL 9 . Major differences between RHEL 8 and RHEL 9 are documented in Considerations in adopting RHEL 9 . Conversion from a different Linux distribution to RHEL If you are using Alma Linux 8, CentOS Linux 8, Oracle Linux 8, or Rocky Linux 8, you can convert your operating system to RHEL 8 using the Red Hat-supported Convert2RHEL utility. For more information, see Converting from an RPM-based Linux distribution to RHEL . If you are using CentOS Linux 7 or Oracle Linux 7, you can convert your operating system to RHEL and then perform an in-place upgrade to RHEL 8. For information regarding how Red Hat supports conversions from other Linux distributions to RHEL, see the Convert2RHEL Support Policy document . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Product Life Cycle Checker Kickstart Generator Kickstart Converter Red Hat Enterprise Linux Upgrade Helper Red Hat Satellite Upgrade Helper Red Hat Code Browser JVM Options Configuration Tool Red Hat CVE Checker Red Hat Product Certificates Load Balancer Configuration Tool Yum Repository Configuration Helper Red Hat Memory Analyzer Kernel Oops Analyzer Red Hat Product Errata Advisory Checker Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 8 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 8. Major differences between RHEL 7 and RHEL 8 , including removed functionality, are documented in Considerations in adopting RHEL 8 . Instructions on how to perform an in-place upgrade from RHEL 7 to RHEL 8 are provided by the document Upgrading from RHEL 7 to RHEL 8 . 
The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is now available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page. Note Release notes include links to access the original tracking tickets. Private tickets have no links and instead feature this footnote. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/overview
Automating SAP HANA Scale-Out System Replication using the RHEL HA Add-On | Automating SAP HANA Scale-Out System Replication using the RHEL HA Add-On Red Hat Enterprise Linux for SAP Solutions 8 Red Hat Customer Content Services | [
"search fence-agents",
"subscription-manager release Release: 8.2 [root:~]# cat /etc/redhat-release Red Hat Enterprise Linux release 8.2 (Ootpa) [root:~]#",
"subscription-manager register",
"subscription-manager list --available --matches=\"rhel-8-for-x86_64-sap-solutions-rpms\"",
"subscription-manager attach --pool=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"yum repolist | grep sap-solution rhel-8-for-x86_64-sap-solutions-rpms RHEL for x86_64 - SAP Solutions (RPMs)",
"subscription-manager repos --enable=rhel-8-for-x86_64-sap-solutions-rpms --enable=rhel-8-for-x86_64-highavailability-rpms",
"yum update -y",
"nmcli con add con-name eth1 ifname eth1 autoconnect yes type ethernet ip4 192.168.101.101/24 nmcli con add con-name eth2 ifname eth2 autoconnect yes type ethernet ip4 192.168.102.101/24",
"cat << EOF >> /etc/hosts 10.0.1.21 dc1hana01.example.com dc1hana01 10.0.1.22 dc1hana02.example.com dc1hana02 10.0.1.23 dc1hana03.example.com dc1hana03 10.0.1.24 dc1hana04.example.com dc1hana04 10.0.1.31 dc2hana01.example.com dc2hana01 10.0.1.32 dc2hana02.example.com dc2hana02 10.0.1.33 dc2hana03.example.com dc2hana03 10.0.1.34 dc2hana04.example.com dc2hana04 10.0.1.41 majoritymaker.example.com majoritymaker EOF",
"mkdir -p /usr/sap",
"mkfs -t xfs -b size=4096 /dev/sdb",
"echo \"/dev/sdb /usr/sap xfs defaults 1 6\" >> /etc/fstab",
"mount /usr/sap",
"yum install -y nfs-utils",
"mkdir -p /hana/{shared,data,log} cat <<EOF >> /etc/fstab 10.0.1.61:/data/dc1/shared /hana/shared nfs4 defaults 0 0 10.0.1.61:/data/dc1/data /hana/data nfs4 defaults 0 0 10.0.1.61:/data/dc1/log /hana/log nfs4 defaults 0 0 EOF",
"mount -a",
"mkdir -p /hana/{shared,data,log} cat <<EOF >> /etc/fstab 10.0.1.62:/data/dc2/shared /hana/shared nfs4 defaults 0 0 10.0.1.62:/data/dc2/data /hana/data nfs4 defaults 0 0 10.0.1.62:/data/dc2/log /hana/log nfs4 defaults 0 0 EOF",
"mount -a",
"hostnamectl set-hostname dc1hana01",
"hostname <hostname> [root:~]# hostname -s <hostname> [root:~]# hostname -f <hostname>.example.com [root:~]# hostname -d example.com",
"localectl set-locale LANG=en_US.UTF-8",
"yum -y install chrony [root:~]# systemctl stop chronyd.service",
"grep ^server /etc/chrony.conf server 0.de.pool.ntp.org server 1.de.pool.ntp.org",
"systemctl enable chronyd.service [root:~]# systemctl start chronyd.service [root:~]# systemctl restart systemd-timedated.service",
"systemctl status chronyd.service chronyd.service enabled [root:~]# chronyc sources 210 Number of sources = 3 MS Name/IP address Stratum Poll Reach LastRx Last sample ===================================================================== ^* 0.de.pool.ntp.org 2 8 377 200 -2659ns[-3000ns] +/- 28ms ^-de.pool.ntp.org 2 8 377 135 -533us[ -533us] +/- 116ms ^-ntp2.example.com 2 9 377 445 +14ms[ +14ms] +/- 217ms",
"adduser sapadm --uid 996 [root:~]# groupadd sapsys --gid 79 [root:~]# passwd sapadm",
"export TEMPDIR=USD(mktemp -d) [root:~]# export INSTALLDIRHOSTAGENT=/install/HANA/DATA_UNITS/HDB_SERVER_LINUX_X86_64/ [root:~]# systemctl disable abrtd [root:~]# systemctl disable abrt-ccpp [root:~]# cp -rp USD{INSTALLDIRHOSTAGENT}/server/HOSTAGENT.TGZ USDTEMPDIR/ cd USDTEMPDIR [root:~]# tar -xzvf HOSTAGENT.TGZ [root:~]# cd global/hdb/saphostagent_setup/ [root:~]# ./saphostexec -install",
"export MYHOSTNAME=USD(hostname) [root:~]# export SSLPASSWORD=Us3Your0wnS3cur3Password [root:~]# export LD_LIBRARY_PATH=/usr/sap/hostctrl/exe/ [root:~]# export SECUDIR=/usr/sap/hostctrl/exe/sec [root:~]# cd /usr/sap/hostctrl/exe [root:~]# mkdir /usr/sap/hostctrl/exe/sec [root:~]# /usr/sap/hostctrl/exe/sapgenpse gen_pse -p SAPSSLS.pse -x USDSSLPASSWORD -r /tmp/USD{MYHOSTNAME}-csr.p10 \"CN=USDMYHOSTNAME\" [root:~]# /usr/sap/hostctrl/exe/sapgenpse seclogin -p SAPSSLS.pse -x USDSSLPASSWORD -O sapadm chown sapadm /usr/sap/hostctrl/exe/sec/SAPSSLS.pse [root:~]# /usr/sap/hostctrl/exe/saphostexec -restart *",
"netstat -tulpen | grep sapstartsrv tcp 0 0 0.0.0.0:50014 0.0.0.0:* LISTEN 1002 84028 4319/sapstartsrv tcp 0 0 0.0.0.0:50013 0.0.0.0:* LISTEN 1002 47542 4319/sapstartsrv",
"netstat -tulpen | grep 1129 tcp 0 0 0.0.0.0:1129 0.0.0.0:* LISTEN 996 25632 1345/sapstartsrv",
"./hdblcm --action=configure_internal_network",
"/hana/shared/RH1/hdblcm/hdblcm",
"INSTALLDIR=/install/51053381/DATA_UNITS HDB_SERVER_LINUX_X86_64/ [root:~]# cd USDINSTALLDIR [root:~]# ./hdblcm --dump_configfile_template=/tmp/templateFile",
"cat /tmp/templateFile.xml | ./hdblcm \\ --batch \\ --sid=RH1 \\ --number=10 \\ --action=install \\ --hostname=dc1hana01 \\ --addhosts=dc1hana02:role=worker,dc1hana03:role=worker,dc1hana04:role =standby \\ --install_hostagent \\ --system_usage=test \\ --sapmnt=/hana/shared \\ --datapath=/hana/data \\ --logpath=/hana/log \\ --root_user=root \\ --workergroup=default \\ --home=/usr/sap/RH1/home \\ --userid=79 \\ --shell=/bin/bash \\ --groupid=79 \\ --read_password_from_stdin=xml \\ --internal_network=192.168.101.0/24 \\ --remote_execution=saphostagent",
"cat /tmp/templateFile.xml | ./hdblcm \\ --batch \\ --sid=RH1 \\ --number=10 \\ --action=install \\ --hostname=dc2hana01 \\ --addhosts=dc2hana02:role=worker,dc2hana03:role=worker,dc2hana04:role =standby \\ --install_hostagent \\ --system_usage=test \\ --sapmnt=/hana/shared \\ --datapath=/hana/data \\ --logpath=/hana/log \\ --root_user=root \\ --workergroup=default \\ --home=/usr/sap/RH1/home \\ --userid=79 \\ --shell=/bin/bash \\ --groupid=79 \\ --read_password_from_stdin=xml \\ --internal_network=192.168.101.0/24 \\ --remote_execution=saphostagent",
"su - rh1adm /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList 10.04.2019 08:38:21 GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc1hana01,10,51013,51014,0.3,HDB|HDB_WORKER, GREEN dc1hana03,10,51013,51014,0.3,HDB|HDB_STANDBY, GREEN dc1hana02,10,51013,51014,0.3,HDB|HDB_WORKER, GREEN dc1hana04,10,51013,51014,0.3,HDB|HDB_WORKER, GREEN rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | --------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | dc1hana01 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | dc1hana02 | yes | ok | | | 2 | 2 | default | default | master 3 | slave | worker | slave | worker | worker | default | default | | dc1hana03 | yes | ok | | | 2 | 2 | default | default | master 3 | slave | worker | slave | worker | worker | default | default | | dc1hana04 | yes | ignore | | | 0 | 0 | default | default | master 2 | slave | standby | standby | standby | standby | default | - | rh1adm@dc1hana01: HDB info USER PID PPID %CPU VSZ RSS COMMAND rh1adm 31321 31320 0.0 116200 2824 -bash rh1adm 32254 31321 0.0 113304 1680 \\_ /bin/sh /usr/sap/RH1/HDB10/HDB info rh1adm 32286 32254 0.0 155356 1868 \\_ ps fx -U rh1adm -o user:8,pid:8,ppid:8,pcpu:5,vsz:10,rss:10,args rh1adm 27853 1 0.0 23916 1780 sapstart pf=/hana/shared/RH1/profile/RH1_HDB10_dc1hana01 rh1adm 27863 27853 0.0 262272 32368 \\_ /usr/sap/RH1/HDB10/dc1hana01/trace/hdb.sapRH1_HDB10 -d -nw -f /usr/sap/RH1/HDB10/dc1hana01/daemon.ini pf=/usr/sap/RH1/SYS/profile/RH1_HDB10_dc1hana01 rh1adm 27879 27863 53.0 9919108 6193868 \\_ hdbnameserver rh1adm 28186 27863 0.7 1860416 268304 \\_ hdbcompileserver rh1adm 28188 27863 65.8 3481068 1834440 \\_ hdbpreprocessor rh1adm 28228 27863 48.2 9431440 6481212 \\_ hdbindexserver -port 31003 rh1adm 28231 27863 2.1 3064008 930796 \\_ hdbxsengine -port 31007 rh1adm 28764 27863 1.1 2162344 302344 \\_ hdbwebdispatcher rh1adm 27763 1 0.2 502424 23376 /usr/sap/RH1/HDB10/exe/sapstartsrvpf=/hana/shared/RH1/profile/RH1_HDB10_dc1hana01 -D -u rh1adm",
"Do this as root [root@dc1hana01]# mkdir -p /hana/shared/backup/ [root@dc1hana01]# chown rh1adm /hana/shared/backup/ [root@dc1hana01]# su - rh1adm [rh1adm@dc1hana01]% hdbsql -i 10 -u SYSTEM -d SYSTEMDB \"BACKUP DATA USING FILE ('/hana/shared/backup/')\" [rh1adm@dc1hana01]% hdbsql -i 10 -u SYSTEM -d RH1 \"BACKUP DATA USING FILE ('/hana/shared/backup/')\"",
"su - rh1adm [rh1adm@dc1hana01]% hdbnsutil -sr_enable --name=DC1 nameserver is active, proceeding ... successfully enabled system as system replication source site done.",
"scp -rp /usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT root@dc2hana01:/usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH 1.DAT [root@dc1hana01]# scp -rp /usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY root@dc2hana01:/usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1 .KEY",
"su - rh1adm [rh1adm@dc1hana01]% hdbnsutil -sr_register --name=DC2 \\ --remoteHost=dc1hana03 --remoteInstance=10 \\ --replicationMode=sync --operationMode=logreplay \\ --online # Start System [rh1adm@dc1hana01]% /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function StartSystem",
"GetInstanceList: rh1adm@dc2hana01:/usr/sap/RH1/HDB10> /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList 01.04.2019 14:17:28 GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc2hana02, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc2hana01, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc2hana04, 10, 51013, 51014, 0.3, HDB|HDB_STANDBY, GREEN dc2hana03, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN Check landscapeHostConfiguration: rh1adm@dc2hana01:/usr/sap/RH1/HDB10> HDBSettings.sh landscapeHostConfiguration.py Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | | | | | | | | | | | | | | | | | | | dc2hana01 | yes | ok | | | 1 | | default | default | master 1 | master | worker | master | worker | worker | default | default | | dc2hana02 | yes | ok | | | 2 | | default | default | slave | slave | worker | slave | worker | worker | default | default | | dc2hana03 | yes | ok | | | 3 | | default | default | master 3 | slave | worker | slave | worker | worker | default | default | | dc2hana04 | yes | ignore | | | 0 | 0 | default | default | master 2 | slave | standby | standby | standby | standby | default | - | overall host status: ok",
"rh1adm@dc1hana01: /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList rh1adm@dc1hana01:/hana/shared/backup> /usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList Red Hat Enterprise Linux HA Solution for SAP HANA Scale Out and System Replication Page 55 26.03.2019 12:41:13 GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc1hana01, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc1hana02, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc1hana03, 10, 51013, 51014, 0.3, HDB|HDB_WORKER, GREEN dc1hana04, 10, 51013, 51014, 0.3, HDB|HDB_STANDBY, GREEN rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh landscapeHostConfiguration.py | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | | --------- | ------ | ------ | -------- | ------ | --------- | --------- | -------- | -------- | ---------- | ---------- | ----------- | ----------- | ------- | ------- | ------- | ------- | | dc1hana01 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | | dc1hana02 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | | dc1hana03 | yes | ok | | | 3 | 3 | default | default | slave | slave | worker | slave | worker | worker | default | default | | dc1hana04 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | Red Hat Enterprise Linux HA Solution for SAP HANA Scale Out and System Replication Page 56 standby | standby | standby | default | - | overall host status: ok rh1adm@dc1hana01:/usr/sap/RH1/HDB10> # Show Systemreplication state rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | --------- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | dc1hana01 | 31001 | nameserver | 1 | 1 | DC1 | dc2hana01 | 31001 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31007 | xsengine | 2 | 1 | DC1 | dc2hana01 | 31007 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31003 | indexserver | 3 | 1 | DC1 | dc2hana01 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana03 | 31003 | indexserver | 5 | 1 | DC1 | dc2hana03 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana02 | 31003 | indexserver | 4 | 1 | DC1 | dc2hana02 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State Red Hat Enterprise Linux HA Solution for SAP HANA Scale Out and System Replication Page 57 ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ mode: PRIMARY site id: 1 site name: DC1 rh1adm@dc1hana01:/usr/sap/RH1/HDB10>",
"rh1adm@dc1hana01:/usr/sap/RH1/HDB10> HDBSettings.sh systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | --------- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | dc1hana01 | 31001 | nameserver | 1 | 1 | DC1 | dc2hana01 | 31001 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31007 | xsengine | 2 | 1 | DC1 | dc2hana01 | 31007 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana01 | 31003 | indexserver | 3 | 1 | DC1 | dc2hana01 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana03 | 31003 | indexserver | 5 | 1 | DC1 | dc2hana03 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | | RH1 | dc1hana02 | 31003 | indexserver | 4 | 1 | DC1 | dc2hana02 | 31003 | 2 | DC2 | YES | SYNC | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ mode: PRIMARY site id: 1 site name: DC1 rh1adm@dc1hana01:/usr/sap/RH1/HDB10>",
"subscription-manager repos --list-enabled +----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel-8-for-x86_64-baseos-e4s-rpms Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: rhel-8-for-x86_64-sap-solutions-e4s-rpms Repo Name: Red Hat Enterprise Linux 8 for x86_64 - SAP Solutions - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: ansible-2.8-for-rhel-8-x86_64-rpms Repo Name: Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: rhel-8-for-x86_64-highavailability-e4s-rpms Repo Name: Red Hat Enterprise Linux 8 for x86_64 - High Availability - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 Repo ID: rhel-8-for-x86_64-appstream-e4s-rpms Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream - Update Services for SAP Solutions (RPMs) Repo URL: <Your repo URL> Enabled: 1 repolist Updating Subscription Management repositories. repo id repo name advanced-virt-for-rhel-8-x86_64-rpms Advanced Virtualization for RHEL 8 x86_64 (RPMs) ansible-2.8-for-rhel-8-x86_64-rpms Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs) rhel-8-for-x86_64-appstream-e4s-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream - Update Services for SAP Solutions (RPMs) rhel-8-for-x86_64-baseos-e4s-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Update Services for SAP Solutions (RPMs) rhel-8-for-x86_64-highavailability-e4s-rpms Red Hat Enterprise Linux 8 for x86_64 - High Availability - Update Services for SAP Solutions (RPMs) rhel-8-for-x86_64-sap-netweaver-e4s-rpms Red Hat Enterprise Linux 8 for x86_64 - SAP NetWeaver - Update Services for SAP Solutions (RPMs) rhel-8-for-x86_64-sap-solutions-e4s-rpms Red Hat Enterprise Linux 8 for x86_64 - SAP Solutions - Update Services for SAP Solutions (RPMs)",
"yum -y install pcs pacemaker fence-agents",
"yum install fence-agents-sbd fence-agents-ipmilan",
"firewall-cmd --permanent --add-service=high-availability [root]# firewall-cmd --add-service=high-availability",
"passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.",
"systemctl start [root]# pcsd.service systemctl enable pcsd.service",
"pcshost auth -u hacluster -p <clusterpassword> dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker Username: hacluster Password: majoritymaker: Authorized dc1hana03: Authorized dc1hana02: Authorized dc1hana01: Authorized dc2hana01: Authorized dc2hana02: Authorized dc1hana04: Authorized dc2hana04: Authorized dc2hana03: Authorized",
"pcs cluster setup scale_out_hsr majoritymaker addr=10.10.10.41 addr=192.168.102.100 dc1hana01 addr=10.10.10.21 addr=192.168.102.101 dc1hana02 addr=10.10.10.22 addr=192.168.102.102 dc1hana03 addr=10.10.10.23 addr=192.168.102.103 dc1hana04 addr=10.10.10.24 addr=192.168.102.104 dc2hana01 addr=10.10.10.31 addr=192.168.102.201 dc2hana02 addr=10.10.10.33 addr=192.168.102.202 dc2hana03 addr=10.10.10.34 addr=192.168.212.203 dc2hana04 addr=10.10.10.10 addr=192.168.102.204 Destroying cluster on nodes: dc1hana01, dc1hana02, dc1hana03, dc1hana04, dc2hana01, dc2hana02, dc2hana03, dc2hana04, majoritymaker dc1hana01: Stopping Cluster (pacemaker) dc1hana04: Stopping Cluster (pacemaker) dc1hana03: Stopping Cluster (pacemaker) dc2hana04: Stopping Cluster (pacemaker) dc2hana01: Stopping Cluster (pacemaker) dc2hana03: Stopping Cluster (pacemaker) majoritymaker: Stopping Cluster (pacemaker) dc2hana02: Stopping Cluster (pacemaker) dc1hana02: Stopping Cluster (pacemaker) dc2hana01: Successfully destroyed cluster dc2hana03: Successfully destroyed cluster dc1hana04: Successfully destroyed cluster dc1hana03: Successfully destroyed cluster dc2hana02: Successfully destroyed cluster dc1hana01: Successfully destroyed cluster dc1hana02: Successfully destroyed cluster dc2hana04: Successfully destroyed cluster majoritymaker: Successfully destroyed cluster Sending 'pacemaker_remote authkey' to 'dc1hana01', 'dc1hana02', 'dc1hana03', 'dc1hana04', 'dc2hana01', 'dc2hana02', 'dc2hana03', 'dc2hana04', 'majoritymaker' dc1hana01: successful distribution of the file 'pacemaker_remote authkey' dc1hana04: successful distribution of the file 'pacemaker_remote authkey' dc1hana03: successful distribution of the file 'pacemaker_remote authkey' dc2hana01: successful distribution of the file 'pacemaker_remote authkey' dc2hana02: successful distribution of the file 'pacemaker_remote authkey' dc2hana03: successful distribution of the file 'pacemaker_remote authkey' dc2hana04: successful distribution of the file 'pacemaker_remote authkey' majoritymaker: successful distribution of the file 'pacemaker_remote authkey' dc1hana02: successful distribution of the file 'pacemaker_remote authkey' Sending cluster config files to the nodes dc1hana01: Succeeded dc1hana02: Succeeded dc1hana03: Succeeded dc1hana04: Succeeded dc2hana01: Succeeded dc2hana02: Succeeded dc2hana03: Succeeded dc2hana04: Succeeded majoritymaker: Succeeded Starting cluster on nodes: dc1hana01, dc1hana02, dc1hana03, dc1hana04, dc2hana01, dc2hana02, dc2hana03, dc2hana04, majoritymaker dc2hana01: Starting Cluster dc1hana03: Starting Cluster dc1hana01: Starting Cluster dc1hana02: Starting Cluster dc1hana04: Starting Cluster majoritymaker: Starting Cluster dc2hana02: Starting Cluster dc2hana03: Starting Cluster dc2hana04: Starting Cluster Synchronizing pcsd certificates on nodes dc1hana01, dc1hana02, dc1hana03, dc1hana04, dc2hana01, dc2hana02, dc2hana03, dc2hana04, majoritymaker majoritymaker: Success dc1hana03: Success dc1hana02: Success dc1hana01: Success dc2hana01: Success dc2hana02: Success dc2hana03: Success dc2hana04: Success dc1hana04: Success Restarting pcsd on the nodes in order to reload the certificates dc1hana04: Success dc1hana03: Success dc2hana03: Success majoritymaker: Success dc2hana04: Success dc1hana02: Success dc1hana01: Success dc2hana01: Success dc2hana02: Success",
"pcs cluster enable --all dc1hana01: Cluster Enabled dc1hana02: Cluster Enabled dc1hana03: Cluster Enabled dc1hana04: Cluster Enabled dc2hana01: Cluster Enabled dc2hana02: Cluster Enabled dc2hana03: Cluster Enabled dc2hana04: Cluster Enabled majoritymaker: Cluster Enabled",
"pcs stonith create <stonith id> <fence_agent> ipaddr=<fence device> login=<login> passwd=<passwd>",
"pcs status Cluster name: hanascaleoutsr Stack: corosync Current DC: dc2hana01 (version 1.1.18-11.el7_5.4-2b07d5c5a9) - partition with quorum Last updated: Tue Mar 26 13:03:01 2019 Last change: Tue Mar 26 13:02:54 2019 by root via cibadmin on dc1hana01 9 nodes configured 1 resource configured Online: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker ] Full list of resources: fencing (stonith:fence_rhevm): Started dc1hana01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled",
"yum install resource-agents-sap-hana-scaleout",
"root# yum repolist \"rhel-x86_64-server-sap-hana-<version>\" RHEL Server SAP HANA (v. <version> for 64-bit <architecture>).",
"su - rh1adm [rh1adm@dc1hana01]% sapcontrol -nr 10 -function StopSystem *[rh1adm@dc1hana01]% cat <<EOF >> /hana/shared/RH1/global/hdb/custom/config/global.ini [ha_dr_provider_SAPHanaSR] provider = SAPHanaSR path = /usr/share/SAPHanaSR-ScaleOut execution_order = 1 [trace] ha_dr_saphanasr = info EOF",
"rh1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_rh1_glob_srHook -v * -t crm_config -s SAPHanaSR rh1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_rh1_gsh -v * -l reboot -t crm_config -s SAPHanaSR Defaults:rh1adm !requiretty",
"Execute the following commands on one HANA node in every datacenter [root]# su - rh1adm [rh1adm]% sapcontrol -nr 10 -function StartSystem",
"[rh1adm@dc1hana01]% cdtrace [rh1adm@dc1hana01]% awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf \"%s %s %s %s\\n\",USD2,USD3,USD5,USD16 }' nameserver_ * 2018-05-04 12:34:04.476445 ha_dr_SAPHanaSR SFAIL 2018-05-04 12:53:06.316973 ha_dr_SAPHanaSR SOK",
"pcs property set maintenance-mode=true",
"pcs resource create rsc_SAPHanaTopology_RH1_HDB10 SAPHanaTopology SID=RH1 InstanceNumber=10 op methods interval=0s timeout=5 op monitor interval=10 timeout=600 clone clone-max=6 clone-node-max=1 interleave=true --disabled",
"root# pcs status --full",
"pcs resource create rsc_SAPHana_RH1_HDB10 SAPHanaController SID=RH1 InstanceNumber=10 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true op demote interval=0s timeout=320 op methods interval=0s timeout=5 op monitor interval=59 role=\"Promoted\" timeout=700 op monitor interval=61 role=\"Unpromoted\" timeout=700 op promote interval=0 timeout=3600 op start interval=0 timeout=3600 op stop interval=0 timeout=3600 promotable clone-max=6 promoted-node-max=1 interleave=true --disabled",
"/usr/sap/hostctrl/exe/sapcontrol -nr 10 -function GetSystemInstanceList GetSystemInstanceList OK hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus dc1hana01,10,51013,51014,0.3,HDB|HDB_WORKER,GREEN dc1hana02,10,51013,51014,0.3,HDB|HDB_WORKER,GREEN dc1hana03,10,51013,51014,0.3,HDB|HDB_WORKER,GREEN dc1hana04,10,51013,51014,0.3,HDB|HDB_STANDBY, GREEN",
"pcs resource create rsc_ip_SAPHana_RH1_HDB10 ocf:heartbeat:IPaddr2 ip=10.0.0.250 op monitor interval=\"10s\" timeout=\"20s\"",
"pcs constraint order start rsc_SAPHanaTopology_RH1_HDB10-clone then start rsc_SAPHana_RH1_HDB10-clone",
"pcs constraint colocation add rsc_ip_SAPHana_RH1_HDB10 with promoted rsc_SAPHana_RH1_HDB10-clone",
"pcs constraint location add topology-avoids-majoritymaker rsc_SAPHanaTopology_RH1_HDB10-clone majoritymaker -INFINITY resource-discovery=never [root@dc1hana01]# pcs constraint location add hana-avoids-majoritymaker rsc_SAPHana_RH1_HDB10-clone majoritymaker -INFINITY resource-discovery=never",
"pcs resource enable <resource-name>",
"pcs property set maintenance-mode=false",
"pcs status Cluster name: hanascaleoutsr Stack: corosync Current DC: dc2hana01 (version 1.1.18-11.el7_5.4-2b07d5c5a9) - partition with quorum Last updated: Tue Mar 26 14:26:38 2019 Last change: Tue Mar 26 14:25:47 2019 by root via crm_attribute on dc1hana01 9 nodes configured 20 resources configured Online: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 majoritymaker ] Full list of resources: fencing (stonith:fence_rhevm): Started dc1hana01 Clone Set: rsc_SAPHanaTopology_RH1_HDB10-clone [rsc_SAPHanaTopology_RH1_HDB10] Started: [ dc1hana01 dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 ] Stopped: [ majoritymaker ] Clone Set: msl_rsc_SAPHana_RH1_HDB10 [rsc_SAPHana_RH1_HDB10] (promotable): Promoted: [ dc1hana01 ] Unpromoted: [ dc1hana02 dc1hana03 dc1hana04 dc2hana01 dc2hana02 dc2hana03 dc2hana04 ] Stopped: [ majoritymaker ] rsc_ip_SAPHana_RH1_HDB10 (ocf::heartbeat:IPaddr2): Started dc1hana01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled [root@dc1hana01]# SAPHanaSR-showAttr --sid=RH1 Global prim srHook sync_state ------------------------------ global DC1 SOK SOK Sit lpt lss mns srr --------------------------------- DC1 1553607125 4 dc1hana01 P DC2 30 4 dc2hana01 S H clone_state roles score site -------------------------------------------------------- 1 PROMOTED promoted1 promoted:worker promoted 150 DC1 2 DEMOTED promoted2:slave:worker:slave 110 DC1 3 DEMOTED slave:slave:worker:slave -10000 DC1 4 DEMOTED promoted3:slave:standby:standby 115 DC1 5 DEMOTED promoted2 promoted:worker promoted 100 DC2 6 DEMOTED promoted3:slave:worker:slave 80 DC2 7 DEMOTED slave:slave:worker:slave -12200 DC2 8 DEMOTED promoted1:slave:standby:standby 80 DC2 9 :shtdown:shtdown:shtdown",
"root# pcs resource create rsc_ip2_SAPHana_RH1_HDB10 ocf:heartbeat:IPaddr2 ip=10.0.0.251 op monitor interval=\"10s\" timeout=\"20s",
"root# pcs constraint location rsc_ip_SAPHana_RH1_HDB10 rule score=500 role=master hana_rh1_roles eq \"master1:master:worker:master\" and hana_rh1_clone_state eq PROMOTED",
"root# pcs constraint location rsc_ip2_SAPHana_RH1_HDB10 rule score=50 id=vip_slave_master_constraint hana_rh1_roles eq 'master1:master:worker:master'",
"root# pcs constraint order promote rsc_SAPHana_RH1_HDB10-clone then start rsc_ip_SAPHana_RH1_HDB10",
"root# pcs constraint order start rsc_ip_SAPHana_RH1_HDB10 then start rsc_ip2_SAPHana_RH1_HDB10",
"root# pcs constraint colocation add rsc_ip_SAPHana_RH1_HDB10 with Master rsc_SAPHana_RH1_HDB10-clone 2000",
"root# pcs constraint colocation add rsc_ip2_SAPHana_RH1_HDB10 with Slave rsc_SAPHana_RH1_HDB10-clone 5",
"root# watch pcs status",
"sidadm% sapcontrol -nr USD{TINSTANCE} -function StopSystem HDB",
"sidadm% sapcontrol -nr USD{TINSTANCE} -function StartSystem HDB",
"pcs node attribute Node Attributes: saphdb1: hana_hdb_gra=2.0 hana_hdb_site=DC1 hana_hdb_vhost=sapvirthdb1 saphdb2: hana_hdb_gra=2.0 hana_hdb_site=DC1 hana_hdb_vhost=sapvirthdb2 saphdb3: hana_hdb_gra=2.0 hana_hdb_site=DC2 hana_hdb_vhost=sapvirthdb3 saphdb4: hana_hdb_gra=2.0 hana_hdb_site=DC2 hana_hdb_vhost=sapvirthdb4",
"pcs resource create nfs_hana_shared_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/shared directory=/hana/shared fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/lognode1 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log2_dc1 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef67.fs-0879de29a7fbb752d.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc1_log_shared/lognode2 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_shared_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef78.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/shared directory=/hana/shared fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef678.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/lognode1 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs resource create nfs_hana_log2_dc2 ocf:heartbeat:Filesystem device=svm-012ab34cd45ef678.fs-088e3f66bf4f22c33.fsx.ap-southeast-2.amazonaws.com:/sap_hana_dc2_log_shared/lognode2 directory=/hana/log/HDB fstype=nfs options=defaults,suid op monitor interval=60s on-fail=fence timeout=20s OCF_CHECK_LEVEL=20 clone [root@dc1hana01]# pcs node attribute sap-dc1-dbn2 NFS_HDB_SITE=DC1N2 [root@dc1hana01]# pcs node attribute sap-dc2-dbn1 NFS_HDB_SITE=DC2N1 [root@dc1hana01]# pcs node attribute sap-dc2-dbn2 NFS_HDB_SITE=DC2N2 [root@dc1hana01]# pcs node attribute sap-dc1-dbn1 NFS_SHARED_HDB_SITE=DC1 [root@dc1hana01]# pcs node attribute sap-dc1-dbn2 NFS_SHARED_HDB_SITE=DC1 [root@dc1hana01]# pcs node attribute sap-dc2-dbn1 NFS_SHARED_HDB_SITE=DC2 [root@dc1hana01]# pcs node attribute sap-dc2-dbn2 NFS_SHARED_HDB_SITE=DC2 [root@dc1hana01]# pcs constraint location nfs_hana_shared_dc1-clone rule resource-discovery=never score=-INFINITY NFS_SHARED_HDB_SITE ne DC1 [root@dc1hana01]# pcs constraint location nfs_hana_log_dc1-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC1N1 [root@dc1hana01]# pcs constraint location nfs_hana_log2_dc1-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC1N2 [root@dc1hana01]# pcs constraint location nfs_hana_shared_dc2-clone rule resource-discovery=never score=-INFINITY NFS_SHARED_HDB_SITE ne DC2 [root@dc1hana01]# pcs constraint location nfs_hana_log_dc2-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC2N1 [root@dc1hana01]# pcs constraint location nfs_hana_log2_dc2-clone rule resource-discovery=never score=-INFINITY NFS_HDB_SITE ne DC2N2 [root@dc1hana01]# pcs resource enable nfs_hana_shared_dc1 *[root@dc1hana01]# pcs resource enable nfs_hana_log_dc1 [root@dc1hana01]# pcs resource enable nfs_hana_log2_dc1 [root@dc1hana01]# pcs resource enable nfs_hana_shared_dc2 [root@dc1hana01]# pcs resource enable nfs_hana_log_dc2 
[root@dc1hana01]# pcs resource enable nfs_hana_log2_dc2 [root@dc1hana01]# pcs resource update nfs_hana_shared_dc1-clone meta clone-max=2 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_shared_dc2-clone meta clone-max=2 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log_dc1-clone meta clone-max=1 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log_dc2-clone meta clone-max=1 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log2_dc1-clone meta clone-max=1 interleave=true [root@dc1hana01]# pcs resource update nfs_hana_log2_dc2-clone meta clone-max=1 interleave=true",
"root@saphdb1:/etc/systemd/system/resource-agents-deps.target.d# more sap_systemd_hdb_00.conf [Unit] Description=Pacemaker SAP resource HDB_00 needs the SAP Host Agent service Wants=saphostagent.service After=saphostagent.service Wants=SAPHDB_00.service After=SAPHDB_00.service",
"systemctl daemon-reload",
"[ha_dr_provider_chksrv] path = /usr/share/SAPHanaSR-ScaleOut execution_order = 2 action_on_lost = stop [trace] ha_dr_saphanasr = info ha_dr_chksrv = info",
"[ rh1adm]USD hdbnsutil -reloadHADRProviders",
"[rh1adm]USD cdtrace [rh1adm]USD cat nameserver_chksrv.trc",
"pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb3",
"pcs constraint remove rsc_SAPHana_HDB_HDB00",
"pcs stonith fence <nodename>",
"sidadm% HDB kill",
"export ListInstances=USD(/usr/sap/hostctrl/exe/saphostctrl -function ListInstances| head -1 ) export sid=USD(echo \"USDListInstances\" |cut -d \" \" -f 5| tr [A-Z] [a-z]) export SID=USD(echo USDsid | tr [a-z] [A-Z]) export Instance=USD(echo \"USDListInstances\" |cut -d \" \" -f 7 ) alias crmm='watch -n 1 crm_mon -1Arf' alias crmv='watch -n 1 /usr/local/bin/crmmv' alias clean=/usr/local/bin/cleanup alias cglo='su - USD{sid}adm -c cglo' alias cdh='cd /usr/lib/ocf/resource.d/heartbeat' alias vhdbinfo=\"vim /usr/sap/USD{SID}/home/hdbinfo;dcp /usr/sap/USD{SID}/home/hdbinfo\" alias gtr='su - USD{sid}adm -c gtr' alias hdb='su - USD{sid}adm -c hdb' alias hdbi='su - USD{sid}adm -c hdbi' alias hgrep='history | grep USD1' alias hri='su - USD{sid}adm -c hri' alias hris='su - USD{sid}adm -c hris' alias killnode=\"echo 'b' > /proc/sysrq-trigger\" alias lhc='su - USD{sid}adm -c lhc' alias python='/usr/sap/USD{SID}/HDBUSD{Instance}/exe/Python/bin/python' alias pss=\"watch 'pcs status --full | egrep -e Node\\|master\\|clone_state\\|roles'\" alias srstate='su - USD{sid}adm -c srstate' alias shr='watch -n 5 \"SAPHanaSR-monitor --sid=USD{SID}\"' alias sgsi='su - USD{sid}adm -c sgsi' alias spl='su - USD{sid}adm -c spl' alias srs='su - USD{sid}adm -c srs' alias sapstart='su - USD{sid}adm -c sapstart' alias sapstop='su - USD{sid}adm -c sapstop' alias sapmode='df -h /;su - USD{sid}adm -c sapmode' alias smm='pcs property set maintenance-mode=true' alias usmm='pcs property set maintenance-mode=false' alias tma='tmux attach -t 0:' alias tmkill='tmux killw -a' alias tm='tail -100f /var/log/messages |grep -v systemd' alias tms='tail -1000f /var/log/messages | egrep -s \"Setting master-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register\\ *|WAITING4LPA\\|EXCLUDE as posible takeover node|SAPHanaSR|failed|USD{HOSTNAME} |PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmss='tail -1000f /var/log/messages | grep -v systemd | egrep -s \"secondary with sync status|Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance} |sr_register|WAITING4LPA|EXCLUDE as posible takeover node|SAPHanaSR |failed|USD{HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmm='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register |WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|W aitforStopped |FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon\"| grep -v systemd' alias tmsl='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register|WAITING4LPA |PROMOTED|DEMOTED|UNDEFINED|ERROR|Warning|mast er_walk|SWAIT |WaitforStopped|FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon\"' alias vih='vim /usr/lib/ocf/resource.d/heartbeat/SAPHanaStart' alias switch1='pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb1' alias switch3='pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb3' alias switch0='pcs constraint remove location-rsc_SAPHana_HDB_HDB00-clone alias switchl='pcs constraint location | grep pcs resource | grep promotable | awk \"{ print USD4 }\"` | grep Constraint| awk \"{ print USDNF }\"' alias scl='pcs constraint location |grep \" Constraint\"'",
"alias tm='tail -100f /var/log/messages |grep -v systemd' alias tms='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register |WAITING4LPA|EXCLUDE as posible takeover node|SAPHanaSR|failed |USD{HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmsl='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register |WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT\"' alias sapstart='sapcontrol -nr USD{TINSTANCE} -function StartSystem HDB;hdbi' alias sapstop='sapcontrol -nr USD{TINSTANCE} -function StopSystem HDB;hdbi' alias sapmode='watch -n 5 \"hdbnsutil -sr_state --sapcontrol=1 |grep site.\\*Mode\"' alias sapprim='hdbnsutil -sr_stateConfiguration| grep -i primary' alias sgsi='watch sapcontrol -nr USD{TINSTANCE} -function GetSystemInstanceList' alias spl='watch sapcontrol -nr USD{TINSTANCE} -function GetProcessList' alias splh='watch \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | grep hdbdaemon\"' alias srs=\"watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py * *; echo Status \\USD?'\" alias cdb=\"cd /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/backup\" alias srstate='watch -n 10 hdbnsutil -sr_state' alias hdb='watch -n 5 \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | egrep -s hdbdaemon\\|hdbnameserver\\|hdbindexserver \"' alias hdbi='watch -n 5 \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | egrep -s hdbdaemon\\|hdbnameserver\\|hdbindexserver ;sapcontrol -nr USD{TINSTANCE} -function GetSystemInstanceList \"' alias hgrep='history | grep USD1' alias vglo=\"vim /usr/sap/USDSAPSYSTEMNAME/SYS/global/hdb/custom/config/global.ini\" alias vgloh=\"vim /hana/shared/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/USD{HOSTNAME}/global.ini\" alias hri='hdbcons -e hdbindexserver \"replication info\"' alias hris='hdbcons -e hdbindexserver \"replication info\" | egrep -e \"SiteID|ReplicationStatus_\"' alias gtr='watch -n 10 /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/Python/bin/python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/getTakeoverRecommendation.py --sapcontrol=1' alias lhc='/usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/Python/bin/python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/landscapeHostConfiguration.py ;echo USD?' alias reg1='hdbnsutil -sr_register --remoteHost=hana07 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC1 --operationMode=logreplay --online' alias reg2='hdbnsutil -sr_register --remoteHost=hana08 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC2 --operationMode=logreplay --online' alias reg3='hdbnsutil -sr_register --remoteHost=hana09 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC3 --operationMode=logreplay --online' PS1=\"\\[\\033[m\\][\\[\\e[1;33m\\]\\u\\[\\e[1;33m\\]\\[\\033[m\\]@\\[\\e[1;36m\\]\\h\\[\\033[m\\]: \\[\\e[0m\\]\\[\\e[1;32m\\]\\W\\[\\e[0m\\]]# \"",
"alias pss='pcs status --full | egrep -e \"Node|master|clone_state|roles\"' [root@saphdb2:~]# pss Node List: Node Attributes: * Node: saphdb1 (1): * hana_hdb_clone_state : PROMOTED * hana_hdb_roles : master1:master:worker:master * master-rsc_SAPHana_HDB_HDB00 : 150 * Node: saphdb2 (2): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : slave:slave:worker:slave * master-rsc_SAPHana_HDB_HDB00 : -10000 * Node: saphdb3 (3): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : master1:master:worker:master * master-rsc_SAPHana_HDB_HDB00 : 100 * Node: saphdb4 (4): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : slave:slave:worker:slave * master-rsc_SAPHana_HDB_HDB00 : -12200",
"pcs resource unmanage SAPHana_RH1_HDB10-clone",
"pcs resource refresh SAPHana_RH1_HDB10-clone",
"pcs resource manage SAPHana_RH1_HDB10-clone",
"pcs resource move SAPHana_RH1_HDB10-clone",
"pcs resource clear SAPHana_RH1_HDB10-clone"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html-single/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/index |
4.329. tsclient | 4.329.1. RHBA-2011:1662 - tsclient bug fix update An updated tsclient package that fixes one bug is now available for Red Hat Enterprise Linux 6. The tsclient utility is a GTK2 front end that makes it easy to use the Remote Desktop Protocol client (rdesktop) and vncviewer utilities. Bug Fix BZ# 667684 Previously, the tsclient utility did not provide a "Client Hostname" option. Users experienced the "No valid license available" error message from the terminal server after many successful connections. To solve this problem, an option to enter client hostname information, "-n", has been added. Now, the user can enter client hostname information, and it is passed to the rdesktop client. All users of tsclient are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/tsclient
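Because tsclient simply forwards the entered value to rdesktop, the fix can be illustrated with rdesktop's own -n option. A minimal sketch, where both the client hostname and the terminal server name are placeholder values:

# Present "myclient" as the client hostname to the terminal server's licensing service
rdesktop -n myclient ts1.example.com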
Chapter 2. Tools for migrating from JBoss EAP XP 2.0.x servers to JBoss EAP XP 3.0.0 servers | Chapter 2. Tools for migrating from JBoss EAP XP 2.0.x servers to JBoss EAP XP 3.0.0 servers You can choose one of the following tools to upgrade and migrate your JBoss EAP XP 2.0.x product to the JBoss EAP XP 3.0.0 product: Migration Toolkit for Applications (MTA) JBoss Server Migration Tool 2.1. Use the JBoss Server Migration Tool to migrate your server configurations Use the JBoss Server Migration Tool when updating your server configuration to include the new features and settings of JBoss EAP XP 3.0.0. You can keep your existing JBoss EAP XP 2.0.x server configuration, provided that JBoss EAP XP 3.0.0 supports the configurations. The JBoss Server Migration Tool reads your existing JBoss EAP XP 2.0.x server configuration files and adds any new required subsystems to these files. The tool also updates existing subsystem configurations with new features, and removes any obsolete subsystem configurations. You can use the JBoss Server Migration Tool to migrate standalone servers and servers in a managed domain for your JBoss EAP XP 3.0.0 configuration. JBoss EAP XP 3.0.0 includes the JBoss Server Migration Tool, so you do not need to download a file and install the tool. Run the jboss-server-migration script, which is located in the EAP_HOME/bin directory, to start the tool. Additional resources For more information about how to configure and run the JBoss Server Migration Tool, see Running the JBoss Server Migration Tool in the Using the JBoss Server Migration Tool guide. 2.2. Use the Migration Toolkit for Applications to analyze applications for migration The Migration Toolkit for Applications (MTA) includes extensible and customizable rule-based tools that simplify migration of Jakarta applications. You can use the toolkit to analyze an application's APIs, technologies, and architectures. The toolkit provides reports for the application you plan to migrate from JBoss EAP XP 2.0.x to JBoss EAP XP 3.0.0. You can use MTA to analyze standard JBoss EAP applications and bootable JAR applications. The MTA reports provide the following information: Detailed explanations of all required migration changes. Whether the change is mandatory or optional. Whether the change is complex or simple. Links to code that requires a migration update. Hints and links to information for helping you complete the required migration changes. An estimate of the level of effort for each migration issue found and the total estimated effort to migrate the application. You can also use MTA to analyze the code and architecture of your JBoss EAP XP 2.0.x applications before you migrate them to JBoss EAP XP 3.0.0. The MTA rule set for migrating applications from JBoss EAP XP 2.0.x to JBoss EAP XP 3.0.0 reports on XML descriptors, specific application code, and parameters that you need to replace with alternative configurations when migrating to JBoss EAP XP 3.0.0. Additional resources For information about the bootable JAR, see About the bootable JAR in the Using MicroProfile with JBoss EAP XP 3.0.0 guide. For more information about using the Migration Toolkit for Applications to analyze your JBoss EAP XP 2.0.x applications, see the Product documentation for Migration Toolkit for Applications. For more information about using Migration Toolkit for Applications with the management CLI, see Run the CLI in the Migration Toolkit for Applications CLI Guide.
For more information about using Migration Toolkit for Applications with the management console, see Using the Web Console to Analyze Applications in the Migration Toolkit for Applications Web Console Guide . 2.3. Upgrades from JBoss EAP 7.3 and earlier JBoss EAP XP 3.0.0 is only supported on JBoss EAP 7.4. If you operate servers on JBoss EAP 7.3 or earlier and want to use JBoss EAP XP, upgrade the servers to JBoss EAP 7.4. Complete any necessary migration before attempting to install JBoss EAP XP. Additional resources For information about migrating your server configuration, see Server Configuration Changes in the Migration Guide . 2.4. MicroProfile application migration MicroProfile 4.0 is based on the Jakarta EE 8 platform. Although Jakarta EE 8 is API backward compatible with Java EE 8, Jakarta EE 8 dependencies replace Java EE 8 dependencies for all MicroProfile specifications. MicroProfile 4.0 includes updates to all the major MicroProfile specifications. The following specifications include API incompatible changes for MicroProfile 4.0: MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile Metrics MicroProfile OpenAPI You must update your applications that use these specifications to the latest Jakarta EE 8 specifications. You can update your applications to MicroProfile 4.0 by choosing one of the following methods: Adding the MicroProfile 4.0 dependency to your project's pom.xml file. Using the JBoss EAP XP BOMs to import supported artifacts to the JBoss EAP XP dependency management of your project's pom.xml file. Additional resources For more information about MicroProfile 4.0 and options for updating your applications to use MicroProfile 4.0, see MicroProfile 4.0 on GitHub . 2.5. Bootable JAR application migration Before you migrate a JBoss EAP XP 2.0.0 bootable JAR application to JBoss EAP XP 3.0.0, you might need to update your JBoss EAP XP bootable JAR Maven plug-in configuration. For JBoss EAP XP 3.0.0, the extraServerContentDirs configuration element replaces the extraServerContent configuration element. This element naming replacement aligns with the pre-existing extra-server-content-dirs element. If you used the extraServerContent element in your JBoss EAP Maven plug-in configuration, you must replace this element with the extraServerContentDirs element. If you used the extra-server-content-dirs element then you do not need to make any configuration changes. Additional resources For more information about the extra-server-content-dirs configuration element, see Enabling HTTP authentication for bootable JAR with a CLI script in the Using MicroProfile with JBoss EAP XP 3.0.0 guide. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/jboss_eap_xp_upgrade_and_migration_guide/tools-migrating-applications-from-eap-versions_default |
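The migration run itself is a single script invocation against the two installations. A minimal sketch, assuming a Linux host and placeholder installation paths; the --source and --target options name the old and new server directories:

EAP_HOME/bin/jboss-server-migration.sh --source /opt/jboss-eap-xp-2.0 --target /opt/jboss-eap-xp-3.0

For the bootable JAR element rename described in section 2.5, a quick search-and-replace over the project POM covers the common case. A sketch only, assuming the string appears solely as the Maven plug-in configuration tag; review the diff before committing:

# Locate any use of the old element name
grep -n "extraServerContent" pom.xml
# Rename opening and closing tags in place (tags already using the new name are left untouched)
sed -i 's|extraServerContent>|extraServerContentDirs>|g' pom.xml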
Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1] | Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1] Description ClusterServiceVersion is a Custom Resource of type ClusterServiceVersionSpec . Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. status object ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. 3.1.1. .spec Description ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. Type object Required displayName install Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. apiservicedefinitions object APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. cleanup object Cleanup specifies the cleanup behaviour when the CSV gets deleted customresourcedefinitions object CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. description string Description of the operator. Can include the features, limitations or use-cases of the operator. displayName string The name of the operator in display format. icon array The icon for this operator. icon[] object install object NamedInstallStrategy represents the block of an ClusterServiceVersion resource where the install strategy is specified. installModes array InstallModes specify supported installation types installModes[] object InstallMode associates an InstallModeType with a flag representing if the CSV supports it keywords array (string) A list of keywords describing the operator. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. links array A list of links related to the operator. links[] object maintainers array A list of organizational entities maintaining the operator. maintainers[] object maturity string minKubeVersion string nativeAPIs array nativeAPIs[] object GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling provider object The publishing entity behind the operator. relatedImages array List any related images, or other container images that your Operator might require to perform their functions. 
This list should also include operand images as well. All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. relatedImages[] object replaces string The name of a CSV this one replaces. Should match the metadata.Name field of the old CSV. selector object Label selector for related resources. skips array (string) The name(s) of one or more CSV(s) that should be skipped in the upgrade graph. Should match the metadata.Name field of the CSV that should be skipped. This field is only used during catalog creation and plays no part in cluster runtime. version string webhookdefinitions array webhookdefinitions[] object WebhookDescription provides details to OLM about required webhooks 3.1.2. .spec.apiservicedefinitions Description APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. Type object Property Type Description owned array owned[] object APIServiceDescription provides details to OLM about apis provided via aggregation required array required[] object APIServiceDescription provides details to OLM about apis provided via aggregation 3.1.3. .spec.apiservicedefinitions.owned Description Type array 3.1.4. .spec.apiservicedefinitions.owned[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.5. .spec.apiservicedefinitions.owned[].actionDescriptors Description Type array 3.1.6. .spec.apiservicedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.7. .spec.apiservicedefinitions.owned[].resources Description Type array 3.1.8. .spec.apiservicedefinitions.owned[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.9. .spec.apiservicedefinitions.owned[].specDescriptors Description Type array 3.1.10. 
.spec.apiservicedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.11. .spec.apiservicedefinitions.owned[].statusDescriptors Description Type array 3.1.12. .spec.apiservicedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.13. .spec.apiservicedefinitions.required Description Type array 3.1.14. .spec.apiservicedefinitions.required[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.15. .spec.apiservicedefinitions.required[].actionDescriptors Description Type array 3.1.16. .spec.apiservicedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.17. .spec.apiservicedefinitions.required[].resources Description Type array 3.1.18. .spec.apiservicedefinitions.required[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.19. .spec.apiservicedefinitions.required[].specDescriptors Description Type array 3.1.20. .spec.apiservicedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. 
It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.21. .spec.apiservicedefinitions.required[].statusDescriptors Description Type array 3.1.22. .spec.apiservicedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.23. .spec.cleanup Description Cleanup specifies the cleanup behaviour when the CSV gets deleted Type object Required enabled Property Type Description enabled boolean 3.1.24. .spec.customresourcedefinitions Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Property Type Description owned array owned[] object CRDDescription provides details to OLM about the CRDs required array required[] object CRDDescription provides details to OLM about the CRDs 3.1.25. .spec.customresourcedefinitions.owned Description Type array 3.1.26. .spec.customresourcedefinitions.owned[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.27. .spec.customresourcedefinitions.owned[].actionDescriptors Description Type array 3.1.28. .spec.customresourcedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.29. .spec.customresourcedefinitions.owned[].resources Description Type array 3.1.30. .spec.customresourcedefinitions.owned[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.31. .spec.customresourcedefinitions.owned[].specDescriptors Description Type array 3.1.32. 
3.1.32. .spec.customresourcedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.33. .spec.customresourcedefinitions.owned[].statusDescriptors Description Type array 3.1.34. .spec.customresourcedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.35. .spec.customresourcedefinitions.required Description Type array 3.1.36. .spec.customresourcedefinitions.required[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.37. .spec.customresourcedefinitions.required[].actionDescriptors Description Type array 3.1.38. .spec.customresourcedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.39. .spec.customresourcedefinitions.required[].resources Description Type array 3.1.40. .spec.customresourcedefinitions.required[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.41. .spec.customresourcedefinitions.required[].specDescriptors Description Type array 3.1.42. .spec.customresourcedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value.
It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.43. .spec.customresourcedefinitions.required[].statusDescriptors Description Type array 3.1.44. .spec.customresourcedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.45. .spec.icon Description The icon for this operator. Type array 3.1.46. .spec.icon[] Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 3.1.47. .spec.install Description NamedInstallStrategy represents the block of a ClusterServiceVersion resource where the install strategy is specified. Type object Required strategy Property Type Description spec object StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. strategy string 3.1.48. .spec.install.spec Description StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. Type object Required deployments Property Type Description clusterPermissions array clusterPermissions[] object StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy deployments array deployments[] object StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create permissions array permissions[] object StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy 3.1.49. .spec.install.spec.clusterPermissions Description Type array 3.1.50. .spec.install.spec.clusterPermissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.51. .spec.install.spec.clusterPermissions[].rules Description Type array 3.1.52. .spec.install.spec.clusterPermissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both.
resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.53. .spec.install.spec.deployments Description Type array 3.1.54. .spec.install.spec.deployments[] Description StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create Type object Required name spec Property Type Description label object (string) Set is a map of label:value. It implements Labels. name string spec object DeploymentSpec is the specification of the desired behavior of the Deployment. 3.1.55. .spec.install.spec.deployments[].spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its containers crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector object Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object The deployment strategy to use to replace existing pods with new ones. template object Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". 3.1.56. .spec.install.spec.deployments[].spec.selector Description Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.57. .spec.install.spec.deployments[].spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array
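As a brief sketch of how matchLabels and matchExpressions combine (all requirements are ANDed), the selector below matches pods that carry a hypothetical app label and whose tier label is one of the listed values:

selector:
  matchLabels:
    app: example-operator        # hypothetical label
  matchExpressions:
    - key: tier
      operator: In
      values:
        - backend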
3.1.58. .spec.install.spec.deployments[].spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.59. .spec.install.spec.deployments[].spec.strategy Description The deployment strategy to use to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. 3.1.60. .spec.install.spec.deployments[].spec.strategy.rollingUpdate Description Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. Type object Property Type Description maxSurge integer-or-string The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable integer-or-string The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This cannot be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 3.1.61. .spec.install.spec.deployments[].spec.template Description Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". Type object Property Type Description metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
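To show how the pieces above (name, spec, selector, strategy, and template) compose, here is a hedged sketch of a single entry under deployments; the names and the image are hypothetical:

deployments:
  - name: example-operator               # hypothetical deployment name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-operator
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: "25%"
          maxUnavailable: "25%"
      template:
        metadata:
          labels:
            app: example-operator        # must match the selector above
        spec:
          serviceAccountName: example-operator
          containers:
            - name: operator
              image: quay.io/example/operator:v1.0.0   # hypothetical image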
3.1.62. .spec.install.spec.deployments[].spec.template.spec Description Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object If specified, the pod's scheduling constraints automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod.
Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod. If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.
If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup overhead integer-or-string Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod.
Containers that need access to the ResourceClaim reference it with this name. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by the specified scheduler. If not specified, the pod will be dispatched by the default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead.
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 3.1.63. .spec.install.spec.deployments[].spec.template.spec.affinity Description If specified, the pod's scheduling constraints Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 3.1.64. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 3.1.65. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.66. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.67. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.68. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.69. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.70. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.71. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.72. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.73. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.74. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.75. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.76. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
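Pulling the nodeAffinity fields together, here is a hedged sketch of a required term (nodeSelectorTerms are ORed, matchExpressions within a term are ANDed) combined with a preferred term; the zone value is hypothetical:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a             # hypothetical zone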
3.1.77. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.78. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.79. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.80. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions.
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.81. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.82. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.83. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.84. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.85. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.86. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.87. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.88. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.89. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.90. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.91. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.92. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.93. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.94. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.95. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.96. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.97. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.98. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.99. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.100. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.101. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.102. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.103. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.104. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.105. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.106. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.107. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.108. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.109. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.110. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.111. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.112. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.113. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.114. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.115. .spec.install.spec.deployments[].spec.template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 3.1.116. .spec.install.spec.deployments[].spec.template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. 
Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.117. .spec.install.spec.deployments[].spec.template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.118. .spec.install.spec.deployments[].spec.template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.119. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.120. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.121. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.122. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.123. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.124. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.125. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.126. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.127. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
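To illustrate the env and envFrom fields described in this section, a minimal sketch; the variable, Secret, and ConfigMap names are hypothetical:

env:
- name: POD_NAMESPACE                  # hypothetical variable name
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: DB_PASSWORD                    # hypothetical variable name
  valueFrom:
    secretKeyRef:
      name: db-credentials             # hypothetical Secret name
      key: password
envFrom:
- prefix: APP_                         # each ConfigMap key becomes APP_<key>
  configMapRef:
    name: common-config                # hypothetical ConfigMap name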
apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.128. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.129. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.130. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.131. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host.
Defaults to HTTP. 3.1.132. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.133. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.134. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.135. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.136. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.137. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers.
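A minimal sketch of the postStart and preStop hooks described above; the endpoint and command are hypothetical:

lifecycle:
  postStart:
    httpGet:
      path: /warmup                      # hypothetical endpoint
      port: 8080
      scheme: HTTP
  preStop:
    exec:
      # the command is exec'd, not run in a shell, so a shell is invoked explicitly
      command: ["/bin/sh", "-c", "sleep 5"]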
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.138. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.139. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.140. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.141. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate.
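For illustration, a minimal sketch of a liveness probe using the fields above; the path and port are hypothetical:

livenessProbe:
  httpGet:
    path: /healthz                       # hypothetical health endpoint
    port: 8081
  initialDelaySeconds: 15
  periodSeconds: 10
  failureThreshold: 3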
Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.142. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.143. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.144. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.145. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.146. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.147. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.148. .spec.install.spec.deployments[].spec.template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. 
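A minimal sketch of the ports array described in this section; the port name and number are hypothetical:

ports:
- name: metrics                          # hypothetical IANA_SVC_NAME-style port name
  containerPort: 8443
  protocol: TCP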
Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.149. .spec.install.spec.deployments[].spec.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.150. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.151. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.exec Description Exec specifies the action to take. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.152. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.153. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.154. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.155. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.156. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.157. .spec.install.spec.deployments[].spec.template.spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.158. .spec.install.spec.deployments[].spec.template.spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.159. 
.spec.install.spec.deployments[].spec.template.spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.160. .spec.install.spec.deployments[].spec.template.spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.161. .spec.install.spec.deployments[].spec.template.spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.162. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. 
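To illustrate the resources stanza described in this section before the securityContext fields continue, a minimal sketch with hypothetical values:

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi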
Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.163. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.164. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.165. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. 
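A minimal sketch of a restrictive container securityContext using the fields described above; which settings a given workload tolerates will vary:

securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  capabilities:
    drop:
    - ALL
  seccompProfile:
    type: RuntimeDefault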
If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.166. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.167. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. 
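For illustration, a minimal sketch of a startup probe for a slow-starting container; the endpoint is hypothetical:

startupProbe:
  httpGet:
    path: /healthz                       # hypothetical endpoint
    port: 8081
  failureThreshold: 30                   # with periodSeconds: 10, allows up to 300s to initialize
  periodSeconds: 10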
initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.168. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.169. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.170. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.171. 
3.1.171. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.172. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.173. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.174. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.175. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.176. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.177. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.178. .spec.install.spec.deployments[].spec.template.spec.dnsConfig Description Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 3.1.179. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 3.1.180. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string
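As a sketch of the dnsConfig fields above, the following shows how a pod template might merge extra resolver settings into the DNS configuration generated from its DNSPolicy. The server address, search domain, and option values are hypothetical.

dnsConfig:
  nameservers:
    - 192.0.2.1                # hypothetical additional resolver
  searches:
    - internal.example.com     # hypothetical extra search domain
  options:
    - name: ndots
      value: "2"               # overrides the ndots option from the base DNSPolicy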
3.1.181. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 3.1.182. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated.
env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Lifecycle is not allowed for ephemeral containers. livenessProbe object Probes are not allowed for ephemeral containers. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probes are not allowed for ephemeral containers. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. securityContext object Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. startupProbe object Probes are not allowed for ephemeral containers. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
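To make the args and command expansion rules above concrete, here is an illustrative sketch of an ephemeral debug container. The container name, image, target container, and variable are hypothetical; the comments show how $(VAR_NAME) and the $$ escape behave.

ephemeralContainers:
  - name: debugger                                  # hypothetical ephemeral container
    image: registry.example.com/debug-tools:latest  # hypothetical image
    targetContainerName: app                        # share the namespaces of a hypothetical "app" container
    command: ["/bin/echo"]
    args: ["$(LOG_DIR)", "$$(LOG_DIR)"]             # prints: /var/log $(LOG_DIR)
    env:
      - name: LOG_DIR
        value: /var/log

The first reference is expanded from the container's environment before the process starts; the doubled $$ escapes the second reference, so the literal string $(LOG_DIR) is passed through unexpanded.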
3.1.183. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.184. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.185. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.186. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select.
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.187. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.188. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.189. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.190. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.191. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.192. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.193. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined
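A brief illustrative sketch of the envFrom sources documented in the preceding subsections; the ConfigMap and Secret names and the prefix are hypothetical placeholders.

envFrom:
  - configMapRef:
      name: app-config          # hypothetical ConfigMap providing plain settings
  - prefix: SECRET_             # hypothetical prefix prepended to each Secret key
    secretRef:
      name: app-credentials     # hypothetical Secret
      optional: true            # tolerate the Secret being absent

With these sources, a key named token in app-credentials would surface as the environment variable SECRET_token, and a duplicate key defined by a later source or an explicit env entry would take precedence.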
3.1.194. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle Description Lifecycle is not allowed for ephemeral containers. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.195. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.196. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.197. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.198. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.199. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.200. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.201. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. 3.1.202. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.203. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.204. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.205. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.206. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
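For orientation, a minimal sketch combining the postStart and preStop handlers described above; the marker command, endpoint path, and port are hypothetical placeholders.

lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo started > /tmp/lifecycle"]  # hypothetical startup marker
  preStop:
    httpGet:
      path: /drain          # hypothetical endpoint that stops accepting new work
      port: 8080
      scheme: HTTP

Note that the pod's termination grace period countdown begins before the preStop handler runs, so the handler must finish within that window.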
3.1.207. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1.
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.208. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.209. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.210. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.211. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.212. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.213. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.214. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 3.1.215. 
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.216. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.217. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell.
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.218. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.219. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.220. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.221. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.222. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.223. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.224. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.225. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources Description Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.226. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.227. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.228. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext Description Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. 
runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.229. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.230. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.231. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. 
The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.232. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
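Tying the securityContext subsections above together, here is a sketch of a locked-down container security context; the UID and the choice of RuntimeDefault are illustrative, not mandated defaults.

securityContext:
  runAsNonRoot: true
  runAsUser: 1001                  # hypothetical non-root UID
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL                        # drop all capabilities granted by the runtime
  seccompProfile:
    type: RuntimeDefault           # use the container runtime's default seccomp profile

Because these fields override their PodSecurityContext counterparts, setting them per container only matters where they differ from the pod-level values.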
3.1.233. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.234. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.235. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.236. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.237. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.238. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.239. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP.
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.240. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.241. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.242. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 3.1.243. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.244. .spec.install.spec.deployments[].spec.template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 3.1.245. .spec.install.spec.deployments[].spec.template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry.
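A short sketch of the hostAliases mapping documented above; the IP address and hostnames are hypothetical placeholders for entries to be injected into the pod's hosts file.

hostAliases:
  - ip: "203.0.113.10"                   # hypothetical address
    hostnames:
      - "api.internal.example.com"
      - "cache.internal.example.com"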
3.1.246. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 3.1.247. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.248. .spec.install.spec.deployments[].spec.template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 3.1.249. .spec.install.spec.deployments[].spec.template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name.
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. 
If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.250. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.251. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.252. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace
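The value and valueFrom fields are mutually exclusive, and each valueFrom entry uses exactly one of the selectors above. A minimal sketch of how the fieldRef, resourceFieldRef, and secretKeyRef selectors might be combined on a container; the Secret name, key, and container name are illustrative assumptions, not values defined by this schema:

env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: MEMORY_LIMIT_MB
    valueFrom:
      resourceFieldRef:
        containerName: app             # hypothetical container name
        resource: limits.memory
        divisor: 1Mi                   # expose the limit in mebibytes
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: example-db-credentials   # hypothetical Secret
        key: password
        optional: false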
3.1.253. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.254. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.255. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.256. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.257. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.258. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.259. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
optional boolean Specify whether the ConfigMap must be defined 3.1.260. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.261. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.262. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. 3.1.263. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.264. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.265. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.266. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.267. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.268. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. 3.1.269. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
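A minimal sketch of how the postStart and preStop handlers documented above might be combined on a regular application container (note that, as stated in the initContainers description, init containers may not use lifecycle actions); the image, script, and drain endpoint are illustrative assumptions:

containers:
  - name: app
    image: registry.example.com/app:1.0          # hypothetical image
    lifecycle:
      postStart:
        exec:
          # Commands are exec'd directly; call out to a shell explicitly
          # when shell syntax is needed.
          command: ["/bin/sh", "-c", "echo started > /tmp/ready"]
      preStop:
        httpGet:
          path: /shutdown                        # hypothetical drain endpoint
          port: 8080
          scheme: HTTP

Because the preStop handler runs while the termination grace period counts down, it should finish well within spec.terminationGracePeriodSeconds.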
3.1.270. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.271. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.272. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.273. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.274. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal.
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.275. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.276. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.277. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.278. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.279. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.280. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.281. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.282. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.283. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate.
Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.284. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.285. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.286. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.287. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.288. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.289. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.290. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.291. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.292. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.293. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.294. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.295. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. 
Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.296. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.297. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.298. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.299. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.300. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. 
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.301. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.302. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC.
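A sketch of how the startupProbe fields above might be combined to protect a slow-starting application container (init containers, as noted earlier, may not use startup probes); the endpoint and thresholds are illustrative assumptions:

startupProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
  periodSeconds: 10
  failureThreshold: 30    # tolerates up to 30 x 10s = 300s of startup time
  timeoutSeconds: 1

Until this probe succeeds, liveness and readiness probes are not executed; once it fails past the threshold, the container is restarted as if the liveness probe had failed.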
3.1.303. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.304. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.305. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.306. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.307. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.308. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.309. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.310. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
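A sketch combining the volumeMounts fields above, including a subPathExpr expanded from the container's environment; the volume names and paths are illustrative assumptions:

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
volumeMounts:
  - name: config               # must match the name of a volume in the pod spec
    mountPath: /etc/app
    readOnly: true
  - name: logs
    mountPath: /var/log/app
    subPathExpr: $(POD_NAME)   # per-pod subdirectory, resolved when the volume is mounted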
3.1.311. .spec.install.spec.deployments[].spec.template.spec.os Description Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional values may be defined in the future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 3.1.312. .spec.install.spec.deployments[].spec.template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 3.1.313. .spec.install.spec.deployments[].spec.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 3.1.314. .spec.install.spec.deployments[].spec.template.spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 3.1.315. .spec.install.spec.deployments[].spec.template.spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object Source describes where to find the ResourceClaim. 3.1.316. .spec.install.spec.deployments[].spec.template.spec.resourceClaims[].source Description Source describes where to find the ResourceClaim.
Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 3.1.317. .spec.install.spec.deployments[].spec.template.spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. Type array 3.1.318. .spec.install.spec.deployments[].spec.template.spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 3.1.319. .spec.install.spec.deployments[].spec.template.spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.320. .spec.install.spec.deployments[].spec.template.spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container.
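A sketch of a pod-level securityContext combining several of the fields above; the UID, GID, and SELinux level values are placeholders, not defaults defined by this schema:

securityContext:
  runAsNonRoot: true
  runAsUser: 1001              # hypothetical non-root UID
  fsGroup: 2000                # supported volumes become group-owned by GID 2000
  seLinuxOptions:
    level: "s0:c123,c456"      # placeholder MCS level
  seccompProfile:
    type: RuntimeDefault

A container can override these values in its own securityContext, which takes precedence as described above.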
Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.322. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.323. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 3.1.324. .spec.install.spec.deployments[].spec.template.spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.325. .spec.install.spec.deployments[].spec.template.spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.326. .spec.install.spec.deployments[].spec.template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. 
If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 3.1.327. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.328. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. 
When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. 
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field.
3.1.329. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
3.1.330. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array
3.1.331. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
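To make the scheduling fields above concrete, the sketch below shows a pod template that tolerates a hypothetical dedicated-node taint and spreads replicas across zones using the label selector just described; the taint key dedicated, its value, and the app label are assumptions for illustration only.

spec:
  template:
    spec:
      tolerations:
      - key: "dedicated"            # assumed taint key
        operator: "Equal"
        value: "operators"
        effect: "NoSchedule"
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchExpressions:
          - key: "app"
            operator: In
            values:
            - "example-operator"

With whenUnsatisfiable: ScheduleAnyway the constraint is a soft preference; DoNotSchedule would instead leave pods unscheduled when no placement satisfies maxSkew.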
3.1.332. .spec.install.spec.deployments[].spec.template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array
3.1.333. .spec.install.spec.deployments[].spec.template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime. cinder object cinder represents a cinder volume attached and mounted on kubelet's host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running. gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 3.1.334. .spec.install.spec.deployments[].spec.template.spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.335. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". 
Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.336. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.337. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.338. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.339. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. 
More info: https://examples.k8s.io/mysql-cinder-pd/README.md
3.1.340. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
3.1.341. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specifies whether the ConfigMap or its keys must be defined
3.1.342. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array
3.1.343. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
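A short sketch of the configMap volume fields just described, projecting one key of a hypothetical ConfigMap to a custom relative path with restricted permissions:

spec:
  template:
    spec:
      volumes:
      - name: operator-config
        configMap:
          name: example-operator-config   # assumed ConfigMap name
          defaultMode: 0440               # octal in YAML; JSON would use decimal 288
          optional: false
          items:
          - key: config.yaml
            path: conf/config.yaml        # relative path, must not contain '..'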
3.1.344. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.
3.1.345. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
3.1.346. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field
3.1.347. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array
3.1.348. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
3.1.349. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version.
3.1.350. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select
3.1.351. .spec.install.spec.deployments[].spec.template.spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
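As an illustrative sketch of the two preceding volume types, a pod template can expose its own metadata through the downward API and add a size-capped scratch directory; the container name manager is an assumption:

spec:
  template:
    spec:
      volumes:
      - name: pod-info
        downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "cpu-limit"
            resourceFieldRef:
              containerName: manager     # containerName is required for volumes
              resource: limits.cpu
              divisor: "1m"              # expose the limit in millicores
      - name: scratch
        emptyDir:
          sizeLimit: "1Gi"               # caps local storage usage for this volume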
3.1.352. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil.
3.1.353. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
3.1.354. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object
3.1.355. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
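As a sketch of the template described above, an ephemeral volume embeds an ordinary PersistentVolumeClaim spec, and the generated PVC is owned by, and deleted with, the pod; the StorageClass name is an assumption:

spec:
  template:
    spec:
      volumes:
      - name: scratch-data
        ephemeral:
          volumeClaimTemplate:
            metadata:
              labels:
                app: example-operator
            spec:
              accessModes:
              - ReadWriteOnce
              storageClassName: "standard-csi"   # assumed StorageClass
              resources:
                requests:
                  storage: 2Gi

Following the naming rule above, the resulting claim would be named <pod name>-scratch-data.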
Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.356. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.357. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. 
(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.358. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.359. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.360. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.361. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.362. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.363. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.364. .spec.install.spec.deployments[].spec.template.spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 3.1.365. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 3.1.366. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.367. .spec.install.spec.deployments[].spec.template.spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 3.1.368. .spec.install.spec.deployments[].spec.template.spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 3.1.369. .spec.install.spec.deployments[].spec.template.spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.370. .spec.install.spec.deployments[].spec.template.spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.371. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 3.1.372. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.373. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.374. .spec.install.spec.deployments[].spec.template.spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.375. .spec.install.spec.deployments[].spec.template.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 3.1.376. .spec.install.spec.deployments[].spec.template.spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 3.1.377. .spec.install.spec.deployments[].spec.template.spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.378. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.379. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.380. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 3.1.381. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.382. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.383. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.384. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.385. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 3.1.386. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.387. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.388. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.389. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 3.1.390. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.391. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.392. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.393. .spec.install.spec.deployments[].spec.template.spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group. readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes; value is set by the plugin. user string user to map volume access to. Defaults to serviceaccount user. volume string volume is a string that references an already created Quobyte volume by name. 3.1.394. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.395. .spec.install.spec.deployments[].spec.template.spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.396. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. 
volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.397. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.398. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.399. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.400. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. 
May not start with the string '..'. 3.1.401. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.402. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.403. .spec.install.spec.deployments[].spec.template.spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 3.1.404. .spec.install.spec.permissions Description Type array 3.1.405. .spec.install.spec.permissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.406. .spec.install.spec.permissions[].rules Description Type array 3.1.407. .spec.install.spec.permissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. 
If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.408. .spec.installModes Description InstallModes specify supported installation types Type array 3.1.409. .spec.installModes[] Description InstallMode associates an InstallModeType with a flag representing if the CSV supports it Type object Required supported type Property Type Description supported boolean type string InstallModeType is a supported type of install mode for CSV installation 3.1.410. .spec.links Description A list of links related to the operator. Type array 3.1.411. .spec.links[] Description Type object Property Type Description name string url string 3.1.412. .spec.maintainers Description A list of organizational entities maintaining the operator. Type array 3.1.413. .spec.maintainers[] Description Type object Property Type Description email string name string 3.1.414. .spec.nativeAPIs Description Type array 3.1.415. .spec.nativeAPIs[] Description GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group kind version Property Type Description group string kind string version string 3.1.416. .spec.provider Description The publishing entity behind the operator. Type object Property Type Description name string url string 3.1.417. .spec.relatedImages Description List any related images, or other container images that your Operator might require to perform their functions. This list should also include operand images as well. All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. Type array 3.1.418. .spec.relatedImages[] Description Type object Required image name Property Type Description image string name string 3.1.419. .spec.selector Description Label selector for related resources. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.420. 
.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.421. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.422. .spec.webhookdefinitions Description Type array 3.1.423. .spec.webhookdefinitions[] Description WebhookDescription provides details to OLM about required webhooks Type object Required admissionReviewVersions generateName sideEffects type Property Type Description admissionReviewVersions array (string) containerPort integer conversionCRDs array (string) deploymentName string failurePolicy string FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled. generateName string matchPolicy string MatchPolicyType specifies the type of match policy. objectSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. reinvocationPolicy string ReinvocationPolicyType specifies what type of policy the admission hook uses. rules array rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffectClass specifies the types of side effects a webhook may have. targetPort integer-or-string timeoutSeconds integer type string WebhookAdmissionType is the type of admission webhooks supported by OLM webhookPath string 3.1.424. .spec.webhookdefinitions[].objectSelector Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.425. .spec.webhookdefinitions[].objectSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.426. .spec.webhookdefinitions[].objectSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.427. .spec.webhookdefinitions[].rules Description Type array 3.1.428. .spec.webhookdefinitions[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 3.1.429. .status Description ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. Type object Property Type Description certsLastUpdated string Last time the owned APIService certs were updated certsRotateAt string Time the owned APIService certs will rotate cleanup object CleanupStatus represents information about the status of cleanup while a CSV is pending deletion conditions array List of conditions, a history of state transitions conditions[] object Conditions appear in the status as a record of state transitions on the ClusterServiceVersion lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Current condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' requirementStatus array The status of each requirement for this CSV requirementStatus[] object 3.1.430. 
.status.cleanup Description CleanupStatus represents information about the status of cleanup while a CSV is pending deletion Type object Property Type Description pendingDeletion array PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. pendingDeletion[] object ResourceList represents a list of resources which are of the same Group/Kind 3.1.431. .status.cleanup.pendingDeletion Description PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. Type array 3.1.432. .status.cleanup.pendingDeletion[] Description ResourceList represents a list of resources which are of the same Group/Kind Type object Required group instances kind Property Type Description group string instances array instances[] object kind string 3.1.433. .status.cleanup.pendingDeletion[].instances Description Type array 3.1.434. .status.cleanup.pendingDeletion[].instances[] Description Type object Required name Property Type Description name string namespace string Namespace can be empty for cluster-scoped resources 3.1.435. .status.conditions Description List of conditions, a history of state transitions Type array 3.1.436. .status.conditions[] Description Conditions appear in the status as a record of state transitions on the ClusterServiceVersion Type object Property Type Description lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' 3.1.437. .status.requirementStatus Description The status of each requirement for this CSV Type array 3.1.438. .status.requirementStatus[] Description Type object Required group kind message name status version Property Type Description dependents array dependents[] object DependentStatus is the status for a dependent requirement (to prevent infinite nesting) group string kind string message string name string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.1.439. .status.requirementStatus[].dependents Description Type array 3.1.440. .status.requirementStatus[].dependents[] Description DependentStatus is the status for a dependent requirement (to prevent infinite nesting) Type object Required group kind status version Property Type Description group string kind string message string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/clusterserviceversions GET : list objects of kind ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions DELETE : delete collection of ClusterServiceVersion GET : list objects of kind ClusterServiceVersion POST : create a ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} DELETE : delete a ClusterServiceVersion GET : read the specified ClusterServiceVersion PATCH : partially update the specified ClusterServiceVersion PUT : replace the specified ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status GET : read status of the specified ClusterServiceVersion PATCH : partially update status of the specified ClusterServiceVersion PUT : replace status of the specified ClusterServiceVersion 3.2.1. /apis/operators.coreos.com/v1alpha1/clusterserviceversions HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.1. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty 3.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions HTTP method DELETE Description delete collection of ClusterServiceVersion Table 3.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.3. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterServiceVersion Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.6. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 202 - Accepted ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} Table 3.7. 
Global path parameters Parameter Type Description name string name of the ClusterServiceVersion HTTP method DELETE Description delete a ClusterServiceVersion Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterServiceVersion Table 3.10. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterServiceVersion Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterServiceVersion Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.15. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status Table 3.16. Global path parameters Parameter Type Description name string name of the ClusterServiceVersion HTTP method GET Description read status of the specified ClusterServiceVersion Table 3.17. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterServiceVersion Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterServiceVersion Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.22. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operatorhub_apis/clusterserviceversion-operators-coreos-com-v1alpha1
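To make the preceding schema sections more concrete, the following is a minimal, illustrative sketch of a ClusterServiceVersion manifest exercising a few of the fields documented above (installModes, install.spec.permissions, and a projected volume). All names, the image digest placeholder, and the RBAC rule are hypothetical examples, not values required by the API:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.0        # hypothetical CSV name
  namespace: operators
spec:
  displayName: Example Operator
  version: 0.1.0
  installModes:                        # see .spec.installModes
    - type: OwnNamespace
      supported: true
    - type: AllNamespaces
      supported: false
  install:
    strategy: deployment
    spec:
      permissions:                     # see .spec.install.spec.permissions
        - serviceAccountName: example-operator-sa   # hypothetical service account
          rules:
            - apiGroups: [""]
              resources: ["configmaps"]
              verbs: ["get", "list", "watch"]
      deployments:
        - name: example-operator
          spec:
            replicas: 1
            selector:
              matchLabels:
                name: example-operator
            template:
              metadata:
                labels:
                  name: example-operator
              spec:
                containers:
                  - name: operator
                    image: quay.io/example/operator@sha256:<digest>   # reference by digest, per the .spec.relatedImages guidance
                volumes:               # see ...spec.volumes[].projected
                  - name: config
                    projected:
                      sources:
                        - configMap:
                            name: example-config     # hypothetical ConfigMap
                            items:
                              - key: settings
                                path: settings.yaml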
18.12.12. Limitations | 18.12.12. Limitations The following is a list of the currently known limitations of the network filtering subsystem. VM migration is only supported if the whole filter tree that is referenced by a guest virtual machine's top-level filter is also available on the target host physical machine. The network filter clean-traffic, for example, should be available on all libvirt installations and thus enable migration of guest virtual machines that reference this filter. To ensure that version compatibility is not a problem, make sure you are using the most current version of libvirt by updating the package regularly. Migration must occur between libvirt installations of version 0.8.1 or later in order not to lose the network traffic filters associated with an interface. VLAN (802.1Q) packets, if sent by a guest virtual machine, cannot be filtered with rules for the protocol IDs arp, rarp, ipv4 and ipv6. They can only be filtered with the protocol IDs MAC and VLAN. Therefore, the example filter clean-traffic ( Example 18.1, "An example of network filtering" ) will not work as expected. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-limitations-filters-network-subsystem
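As a hedged illustration of the filter referencing discussed in this entry, a guest interface definition that applies the clean-traffic filter typically looks like the following; the bridge name and MAC address are placeholders, and migration succeeds only if the referenced filter tree also exists on the target host:

<interface type='bridge'>
  <source bridge='br0'/>                          <!-- placeholder bridge -->
  <mac address='52:54:00:aa:bb:cc'/>              <!-- placeholder MAC -->
  <filterref filter='clean-traffic'/>             <!-- filter referenced by name -->
</interface>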
Chapter 67. JmxTransSpec schema reference | Chapter 67. JmxTransSpec schema reference The type JmxTransSpec has been deprecated. Used in: KafkaSpec Property Property type Description image string The image to use for the JmxTrans. outputDefinitions JmxTransOutputDefinitionTemplate array Defines the output hosts that will be referenced later on. For more information on these properties, see JmxTransOutputDefinitionTemplate schema reference . logLevel string Sets the logging level of the JmxTrans deployment. For more information, see JmxTrans Logging Level . kafkaQueries JmxTransQueryTemplate array Queries to send to the Kafka brokers to define what data should be read from each broker. For more information on these properties, see JmxTransQueryTemplate schema reference . resources ResourceRequirements CPU and memory resources to reserve. template JmxTransTemplate Template for JmxTrans resources. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-jmxtransspec-reference
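For illustration, a minimal sketch of how this (deprecated) type could appear under the jmxTrans property of a Kafka resource. Field names follow the JmxTransOutputDefinitionTemplate and JmxTransQueryTemplate schemas referenced above; the writer class, output name, and MBean pattern are illustrative assumptions, not defaults:

jmxTrans:
  logLevel: debug
  outputDefinitions:
    - outputType: "com.googlecode.jmxtrans.model.output.StdOutWriter"   # assumed writer class
      name: "standardOut"                                               # assumed output name
  kafkaQueries:
    - targetMBean: "kafka.server:type=BrokerTopicMetrics,name=*"        # assumed MBean pattern
      attributes: ["Count"]
      outputs: ["standardOut"]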
8.4.7. Working with Yum Cache | 8.4.7. Working with Yum Cache By default, yum deletes downloaded data files when they are no longer needed after a successful operation. This minimizes the amount of storage space that yum uses. However, you can enable caching, so that the package files downloaded by yum stay in cache directories. By using cached data, you can carry out certain operations without a network connection; you can also copy packages stored in the caches and reuse them elsewhere. Yum stores temporary files in the /var/cache/yum/$basearch/$releasever/ directory, where $basearch and $releasever are Yum variables referring to the base architecture of the system and the release version of Red Hat Enterprise Linux. Each configured repository has one subdirectory. For example, the directory /var/cache/yum/$basearch/$releasever/development/packages/ holds packages downloaded from the development repository. You can find the values for the $basearch and $releasever variables in the output of the yum version command. To change the default cache location, modify the cachedir option in the [main] section of the /etc/yum.conf configuration file. See Section 8.4, "Configuring Yum and Yum Repositories" for more information on configuring yum . Enabling the Caches To retain the cache of packages after a successful installation, add the following text to the [main] section of /etc/yum.conf (a consolidated example configuration is shown after this entry). keepcache = 1 Once you enable caching, every yum operation may download package data from the configured repositories. To download and make usable all the metadata for the currently enabled yum repositories, type: yum makecache This is useful if you want to make sure that the cache is fully up to date with all metadata. To set the time after which the metadata will expire, use the metadata_expire setting in /etc/yum.conf . Using yum in Cache-only Mode To carry out a yum command without a network connection, add the -C or --cacheonly command-line option. With this option, yum proceeds without checking any network repositories, and uses only cached files. In this mode, yum may only install packages that have been downloaded and cached by a previous operation. For instance, to list packages that use the currently cached data with names that contain "gstreamer", enter the following command: yum -C list gstreamer* Clearing the yum Caches It is often useful to remove entries accumulated in the /var/cache/yum/ directory. If you remove a package from the cache, you do not affect the copy of the software installed on your system. To remove all entries for currently enabled repositories from the cache, type the following as root : yum clean all There are various ways to invoke yum in clean mode depending on the type of cached data you want to remove. See Table 8.3, "Available yum clean options" for a complete list of available configuration options. Table 8.3. Available yum clean options Option Description expire-cache eliminates time records of the metadata and mirrorlists download for each repository. This forces yum to revalidate the cache for each repository the next time it is used. packages eliminates any cached packages from the system headers eliminates all header files that previous versions of yum used for dependency resolution metadata eliminates all files that yum uses to determine the remote availability of packages. These metadata are downloaded again the next time yum is run. dbcache eliminates the sqlite cache used for faster access to metadata. 
Using this option will force yum to download the sqlite metadata the next time it is run. This does not apply for repositories that contain only .xml data; in that case, sqlite data are deleted but without subsequent download rpmdb eliminates any cached data from the local rpmdb plugins enabled plugins are forced to eliminate their cached data all removes all of the above The expire-cache option is most preferred from the above list. In many cases, it is a sufficient and much faster replacement for clean all . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Working_with_Yum_Cache
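For reference, a minimal /etc/yum.conf [main] section that enables the cache as described above might look like the following; cachedir is shown at its default value, and the metadata_expire value is an arbitrary example:

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=1
metadata_expire=90m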
4.3.4. Forked Execution | 4.3.4. Forked Execution Among the more challenging bugs confronting programmers is where one program (the parent ) makes an independent copy of itself (a fork ). That fork then creates a child process which, in turn, fails. Debugging the parent process may or may not be useful. Often the only way to get to the bug may be by debugging the child process, but this is not always possible. The set follow-fork-mode feature is used to overcome this barrier allowing programmers to follow a a child process instead of the parent process. set follow-fork-mode parent The original process is debugged after a fork. The child process runs unimpeded. This is the default. set follow-fork-mode child The new process is debugged after a fork. The parent process runs unimpeded. show follow-fork-mode Display the current debugger response to a fork call. Use the set detach-on-fork command to debug both the parent and the child processes after a fork, or retain debugger control over them both. set detach-on-fork on The child process (or parent process, depending on the value of follow-fork-mode ) will be detached and allowed to run independently. This is the default. set detach-on-fork off Both processes will be held under the control of GDB. One process (child or parent, depending on the value of follow-fork-mode ) is debugged as usual, while the other is suspended. show detach-on-fork Show whether detach-on-fork mode is on or off. Consider the following program: fork.c #include <unistd.h> int main() { pid_t pid; const char *name; pid = fork(); if (pid == 0) { name = "I am the child"; } else { name = "I am the parent"; } return 0; } This program, compiled with the command gcc -g fork.c -o fork -lpthread and examined under GDB will show: gdb ./fork [...] (gdb) break main Breakpoint 1 at 0x4005dc: file fork.c, line 8. (gdb) run [...] Breakpoint 1, main () at fork.c:8 8 pid = fork(); (gdb) Detaching after fork from child process 3840. 9 if (pid == 0) (gdb) 15 name = "I am the parent"; (gdb) 17 return 0; (gdb) print name USD1 = 0x400717 "I am the parent" GDB followed the parent process and allowed the child process (process 3840) to continue execution. The following is the same test using set follow-fork-mode child . (gdb) set follow-fork-mode child (gdb) break main Breakpoint 1 at 0x4005dc: file fork.c, line 8. (gdb) run [...] Breakpoint 1, main () at fork.c:8 8 pid = fork(); (gdb) [New process 3875] [Thread debugging using libthread_db enabled] [Switching to Thread 0x7ffff7fd5720 (LWP 3875)] 9 if (pid == 0) (gdb) 11 name = "I am the child"; (gdb) 17 return 0; (gdb) print name USD2 = 0x400708 "I am the child" (gdb) GDB switched to the child process here. This can be permanent by adding the setting to the appropriate .gdbinit . For example, if set follow-fork-mode ask is added to ~/.gdbinit , then ask mode becomes the default mode. | [
"#include <unistd.h> int main() { pid_t pid; const char *name; pid = fork(); if (pid == 0) { name = \"I am the child\"; } else { name = \"I am the parent\"; } return 0; }",
"gdb ./fork [...] (gdb) break main Breakpoint 1 at 0x4005dc: file fork.c, line 8. (gdb) run [...] Breakpoint 1, main () at fork.c:8 8 pid = fork(); (gdb) next Detaching after fork from child process 3840. 9 if (pid == 0) (gdb) next 15 name = \"I am the parent\"; (gdb) next 17 return 0; (gdb) print name USD1 = 0x400717 \"I am the parent\"",
"(gdb) set follow-fork-mode child (gdb) break main Breakpoint 1 at 0x4005dc: file fork.c, line 8. (gdb) run [...] Breakpoint 1, main () at fork.c:8 8 pid = fork(); (gdb) next [New process 3875] [Thread debugging using libthread_db enabled] [Switching to Thread 0x7ffff7fd5720 (LWP 3875)] 9 if (pid == 0) (gdb) next 11 name = \"I am the child\"; (gdb) next 17 return 0; (gdb) print name USD2 = 0x400708 \"I am the child\" (gdb)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/gdbforkedexec |
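As a follow-on to the follow-fork-mode example above, the session below sketches how set detach-on-fork off keeps both the parent and the child under GDB control at the same time, using the same fork binary. The process IDs, inferior numbers, and the /home/user path are illustrative.

    (gdb) set detach-on-fork off
    (gdb) break main
    Breakpoint 1 at 0x4005dc: file fork.c, line 8.
    (gdb) run
    Breakpoint 1, main () at fork.c:8
    8         pid = fork();
    (gdb) next
    [New process 4021]
    9         if (pid == 0)
    (gdb) info inferiors
      Num  Description       Executable
    * 1    process 4017      /home/user/fork
      2    process 4021      /home/user/fork
    (gdb) inferior 2
    [Switching to inferior 2 [process 4021] (/home/user/fork)]

From here, each inferior can be selected and stepped independently, which is often the quickest way to examine a failure that occurs only in the child.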
Security hardening | Security hardening Red Hat Enterprise Linux 8 Enhancing security of Red Hat Enterprise Linux 8 systems Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/security_hardening/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/proc_providing-feedback-on-red-hat-documentation_configuring-gfs2-file-systems |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/making-open-source-more-inclusive |
Chapter 6. Mirroring data for hybrid and Multicloud buckets | Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing store that can be used by the MCG. For information, see Chapter 3, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML, or the MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you download the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . | [
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/mirroring-data-for-hybrid-and-Multicloud-buckets |
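After the mirrored bucket class and claim are created, you may want to confirm that both resources are healthy before directing workloads at the bucket. The following sketch reuses the resource names from the examples above and assumes the OBC lives in the openshift-storage namespace; adjust the namespace to wherever your claim was created.

    # Confirm the bucket class exists and shows the Mirror placement policy
    oc -n openshift-storage get bucketclass mirror-to-aws -o yaml
    # Confirm the claim is bound and a bucket was provisioned
    oc -n openshift-storage get obc mirrored-bucket
    # Overall MCG health, including the state of both backing stores
    noobaa status -n openshift-storage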
Chapter 1. Validating an installation | Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: $ cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: $ oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example.
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: $ oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: $ oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: $ oc describe clusterversion List the current update channel: $ oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: $ oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster using the web console for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: $ oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: $ oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster.
Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: $ oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: $ oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: $ oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 1.5. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.29.4 control-plane-1.example.com Ready master 41m v1.29.4 control-plane-2.example.com Ready master 45m v1.29.4 compute-2.example.com Ready worker 38m v1.29.4 compute-3.example.com Ready worker 33m v1.29.4 control-plane-3.example.com Ready master 41m v1.29.4 Review CPU and memory resource availability for each cluster node: $ oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.6. Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home → Overview . 1.7.
Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Cluster List in OpenShift Cluster Manager and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.8. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe → Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. 1.9. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role.
Procedure In the Administrator perspective, navigate to the Observe → Alerting → Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts as an Administrator for further details about alerting in OpenShift Container Platform. 1.10. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster . | [
"cat <install_dir>/.openshift_install.log",
"time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"",
"oc adm node-logs <node_name> -u crio",
"Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"",
"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4",
"oc get clusteroperators.config.openshift.io",
"oc describe clusterversion",
"oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'",
"{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}",
"oc adm upgrade",
"Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"Manual",
"oc get secrets -n kube-system <secret_name>",
"Error from server (NotFound): secrets \"aws-creds\" not found",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'",
"oc get pods -n openshift-cloud-credential-operator",
"NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.29.4 control-plane-1.example.com Ready master 41m v1.29.4 control-plane-2.example.com Ready master 45m v1.29.4 compute-2.example.com Ready worker 38m v1.29.4 compute-3.example.com Ready worker 33m v1.29.4 control-plane-3.example.com Ready master 41m v1.29.4",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/validation_and_troubleshooting/validating-an-installation |
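The individual checks in this chapter can be collected into a single post-installation pass. The script below is a minimal sketch: it assumes an oc session that is already logged in with cluster-admin rights, and the awk column positions assume the default oc get clusteroperators output (NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE).

    #!/bin/bash
    set -e
    # Overall cluster version and rollout state
    oc get clusterversion
    # Every node should report Ready
    oc get nodes
    # Flag any cluster Operator that is not Available (column 3) or is Degraded (column 5)
    oc get clusteroperators.config.openshift.io --no-headers | \
      awk '$3 != "True" || $5 == "True" {print "check operator:", $1}'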